Authors: Meera Sampath and Pramod P. Khargonekar
Socially responsible automation (SRA) is a vision, concept, and framework to address the strong need to shape the future development of automation to help create a better world for people and society.
The past few decades have witnessed significant strides in the adoption and proliferation of automation spurred by technological advances in computing, sensing, networking, and communications. Breakthroughs in artificial intelligence (AI) and machine learning, which may currently be the most important general-purpose technologies (Brynjolfsson and McAfee 2017), have broadened the scope of automation beyond mechanized labor and industrial robotics to knowledge work and cognitive agents. Machines increasingly not only perform repetitive, routine tasks in predictable environments but also are being deployed to make complex judgments and solve problems that typically require human intelligence and understanding.
Manufacturing automation, in particular, has significantly affected the employment, productivity, and economic performance of companies and nations. As automation begins to impact knowledge work and the services sector, effects on the global workforce will be even more profound. Although, given the many comparative advantages that humans have, the scope of full substitution of human jobs by automation is likely to remain bounded, at least for the foreseeable future (Atkinson 2017; Autor 2015; Bughin et al. 2017), worker displacement, demand for newer skills, and the continued evolution of work-supplying organizations are inevitable as automation technology develops.
A recent report from the National Academies of Sciences, Engineering, and Medicine (NASEM 2017) discusses in depth the impact of information technology (IT) and automation on the US workforce. While automation, in conjunction with globalization, trade, and economic policies, has been a strong contributing factor to lower employment ratios and increased income inequalities over the past few decades, it is not just the technologies themselves but the choices made around them that have driven these impacts. Noting, for example, that advances in internet and communication technologies paved the way, in a manner unforeseen, for the outsourcing and offshoring of business work, the report notes that organizational decisions, power structures, and ideologies ultimately shape the outcomes of technologies for the workforce, society, and economy. And “technologists, policymakers (such as private-sector managers and public officials), and other leaders have the power to design IT and deploy it for the benefit of society, driven by a broad discussion of what impacts are desirable and a deeper understanding of how design, deployment, and policy decisions can achieve these impacts” (NASEM 2017, p. 138).
Similar sentiments are echoed in a report of the IEEE (2016) Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Citing the technology community’s lack of awareness and ownership of socioeconomic concerns surrounding automation, the report urges all those “involved in the research, design, manufacture, or messaging” of autonomous systems and AI to go beyond the search for more computational power or the attainment of purely functional goals and technical solutions (IEEE 2016, p. 3). It calls on them to place human well-being, empowerment, and prosperity at the core of their pursuits, and to ensure that technology choices are “thoroughly scrutinized for social costs and advantages that will also increase economic value for organizations by embedding human values in design” (IEEE 2016, p. 36).
Socially Responsible Automation (SRA)
Motivated by the above considerations, we introduce the vision, concept, and framework of socially responsible automation to help technologists and business leaders drive the evolution of automation for societal good. This aspirational vision is grounded on two principles:
Our definition of automation encompasses mechanized physical labor as well as information-based cognitive work (“knowledge work”) and combinations of these. Also, while the term “human-centric” (or “human-centered”) automation has been used by some researchers (e.g., Oishi et al. 2016) in the context of safety and efficiency of human-technology interaction in semiautonomous systems, we use human-centric to refer to approaches that broadly support the professional, social, and economic well-being of humans in a world of ubiquitous automation.
We define, describe, and illustrate the SRA vision using a four-level conceptual model that captures current industry practices as well as envisioned future approaches to automation. The SRA pyramid (figure 1) provides a simple but powerful visual aid for guiding automation strategy development.
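As a purely illustrative aid, the four levels of the pyramid could be encoded as a simple classification sketch; the level names paraphrase the model, while the boolean criteria and the `classify` heuristic are our own invention, not part of the framework itself:

```python
from enum import IntEnum

class SRALevel(IntEnum):
    """Levels of the SRA pyramid, ordered from lowest to highest."""
    COST_FOCUSED = 0          # labor-cost reduction drives technology decisions
    PERFORMANCE_DRIVEN = 1    # end-to-end performance metrics drive design
    HUMAN_CENTERED = 2        # worker development and enrichment are explicit goals
    SOCIALLY_RESPONSIBLE = 3  # growth, jobs, and societal well-being are central

def classify(considers_performance: bool,
             centers_workers: bool,
             targets_societal_good: bool) -> SRALevel:
    """Map an automation program's stated goals to a pyramid level.

    A deliberately coarse heuristic: each level subsumes the ones below it,
    so the highest satisfied criterion determines the level.
    """
    if targets_societal_good:
        return SRALevel.SOCIALLY_RESPONSIBLE
    if centers_workers:
        return SRALevel.HUMAN_CENTERED
    if considers_performance:
        return SRALevel.PERFORMANCE_DRIVEN
    return SRALevel.COST_FOCUSED
```

A program judged only on labor savings would classify as Level 0; one that also pursues societal well-being reaches Level 3, reflecting the subsumption ordering of the pyramid.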
Level 0: Cost-Focused Automation
At the lowest level of the model are approaches to automation that are predominantly cost-focused: economic benefits from labor reduction drive technology decisions. Not only are such cost-based programs neither socially conscious nor human-centric, they also often fail to deliver, prove unsustainable, or even end up being detrimental to business interests.
Consider, for example, the business process outsourcing (BPO) industry whose core business model is based on labor arbitrage and the availability of inexpensive human capital in developing countries. Rising costs of doing business in once preferred destinations such as India and China have driven increasing interest in new technologies such as robotic process automation (RPA), the use of software “bots” (IRPAAI 2015) for repetitive, high-volume tasks. Considered a disruptive trend, RPA holds tremendous promise for the BPO industry. However, success has so far been limited (Edlich and Sohoni 2017; Rutaganda et al. 2017) in part because of (1) a piecemeal approach to automation that fails to address systemwide implications and outcomes; (2) failure to account for the subtle but vital roles of humans in handling complex, nonstandard, and changing situations; and (3) the (hidden) costs of automation itself.
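At its simplest, an RPA “bot” is a rule-based script that processes high-volume, routine records and, crucially, escalates nonstandard cases to a human, the role that point (2) above notes is often neglected. The sketch below is a hypothetical illustration; the record fields, thresholds, and rules are all invented:

```python
def process_invoice(invoice, handled, escalated):
    """Rule-based handling of a routine record; nonstandard cases go to a human."""
    # Routine path: standard currency and amount below an approval threshold.
    if invoice.get("currency") == "USD" and invoice.get("amount", 0) < 10_000:
        handled.append(invoice["id"])
    else:
        # Complex or unusual cases are escalated rather than mishandled,
        # preserving the human role in nonstandard situations.
        escalated.append(invoice["id"])

invoices = [
    {"id": "A1", "currency": "USD", "amount": 250},
    {"id": "A2", "currency": "EUR", "amount": 250},     # nonstandard currency
    {"id": "A3", "currency": "USD", "amount": 50_000},  # above threshold
]
handled, escalated = [], []
for inv in invoices:
    process_invoice(inv, handled, escalated)
# Routine records are automated; the exceptions queue goes to human reviewers.
```

A piecemeal deployment that simply drops the escalation path, automating the routine cases while leaving exceptions unowned, is one concrete way the failures described above arise.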
Level 1: Performance-Driven Automation
At the next level of automation, productivity and other performance metrics such as accuracy, scalability, speed, quality of service, and flexibility drive design and technology choices. Performance-focused approaches address several of the shortcomings of Level 0 automation by taking an end-to-end system view that is cognizant of the role of the human in the loop. Processes and systems are reengineered to take advantage of the benefits of automation while leveraging human skills and capabilities to supplement and overcome the limitations of technological solutions.
As an example, consider the retail giant Amazon’s judicious integration of human and machine skills in its warehouses, where employees “pick, pack, and stow” goods while robots handle the transportation of loaded bins and shelves. Thus, robots do the routine tasks and “heavy lifting” that they are best suited for and humans perform tasks that require dexterity and flexibility that robots cannot yet do. This large-scale automation is reported to have resulted in significant reductions in “click to ship” cycle times and operating costs (Wingfield 2017).
Level 1 automation approaches move beyond cost efficiencies, but they are still driven primarily by business metrics without taking account of workforce implications or the societal costs and benefits of technology.
Level 2: Human (Worker)-Centered Automation
Human-centered automation approaches explicitly acknowledge and emphasize the critical and valuable role of people in human-machine cooperative systems. They are based on the idea that the ultimate goal of automation is not to sideline people or replace them with machines but to encourage new forms of human-technology interaction, augment human capabilities, and create new roles for people. The business goals are not just performance optimization but also worker development and enrichment. In comparison to the previous two levels, Level 2 automation is not technology-centric but, as the term makes clear, worker-centric. It is the first step in socially responsible automation practices.
Toyota exemplifies the adoption of human-centric automation practices with its philosophy that “robots are not the strategic centerpiece, but merely enablers and handmaidens, helping assemblers do their jobs better, stimulating employee innovation and when possible facilitating cost gains” (Rothfeder 2017). On Toyota’s manufacturing lines, workers don’t just troubleshoot and fix problems; they produce goods manually first, then continually innovate and simplify processes; once they perfect a process, the machines take over. In some cases Toyota has even eliminated automation so that workers retain their core expertise and skills and remain cognizant of the criticality of their roles in the company’s mission.
Far from considering human workers as an expense to be avoided, Level 2 approaches leverage human capabilities to derive more business benefits in a manner that is workforce empowering. However, strategies and choices are still viewed within the sphere of the organization and not those of the broader business-society ecosystem.
Level 3: Socially Responsible Automation
At the highest level of the model is SRA: the technology choices, business strategies, innovation approaches, and management practices that move the affordances of automation beyond cost and performance efficiencies toward profitable and sustainable growth, with more and better jobs driving economic development and social cohesion. SRA thus pursues two core goals in tandem: driving growth through automation, and ensuring that this growth advances both economic performance and societal well-being.
Automation is inherently labor-reducing: “the structural dynamics of the economic system inevitably tend to generate what has rightly been called technological unemployment. At the same time, the very same structural dynamics produce counter-balancing movements which are capable of bringing the macroeconomic condition [of full employment] toward fulfilment, but not automatically” (Pasinetti 1981, p. 90; emphasis in original). While productivity gains from automation may lead to increased demand for a company’s goods and services—increasing, in turn, the demand for labor—such outcomes occur only under the right conditions of labor supply, income levels, and demand for goods (Autor 2015).
Realizing the goals of SRA therefore will require explicit, active interventions, such as economic policies (Pluess 2015), and/or, as we suggest, simultaneously exercising the twin levers of automation and innovation. In other words, proactive, conscientious, and systematic identification of opportunities for new revenue streams and job-enabling growth should be an integral part of a business’s automation strategy while leveraging the cost efficiencies and operational enhancements that automation provides.
Toyota is a great example, not just for human-centric automation but also for its SRA practices. The company’s sustained growth and competitive positioning as an industry leader are the result of a judicious combination of the use of automation, innovation, and sound management practices. Toyota’s strategy does not primarily target labor to reduce production expenses but instead is based on the smart use of materials, the design of parts to maximize performance and fuel efficiency, a platform-based approach for more economical global sharing of engine and vehicle models, and emphasis on lean processes that enable zero-downtime flexible manufacturing. With these measures, the company has continued to enjoy the top spot in sales and industry profit margins through the years, along with an expanding global workforce (Rothfeder 2017).
The next example is one that has become a poster child for the success of small business manufacturing in the automation era (Fishman 2013). Faced with declining demand for its products and rising competition, Marlin Steel, once the “king of the bagel baskets,” reinvented itself through a series of remarkable measures. It made significant forward-looking investments in robotics and automation; reengineered its production processes; enhanced its product line to manufacture high-value, highly engineered custom metal wire products; and expanded its client base to new markets and customers. In addition to these structural measures, it invested in its people, equipping them with the skills and training necessary to survive and grow in the new technology-driven workplace. By taking this innovation-driven, business-focused, human-centric, and responsible approach to automation, Marlin Steel has grown its revenue, competitiveness, and employee base.
Our four-level construct may remind the reader of Carroll’s (2016) pyramid, the well-known model of corporate social responsibility (CSR). Indeed, our aspirational view of SRA is guided by the literature on CSR, a rich and mature field with theoretical underpinnings in the disciplines of business ethics, economics, and moral philosophy (Godfrey and Hatch 2007). In particular, we believe that SRA aligns best with the stakeholder theory of CSR. We also note the connection between SRA and ethical AI, which addresses a broader set of values (e.g., human rights, fairness, bias, transparency, and privacy; IEEE 2016) beyond the labor and workforce implications that are our focus in this paper.
Realizing the SRA Vision
Realizing the goals of SRA requires an organization’s development and implementation, at many levels, of robust business, innovation, design, and technology strategies that are all aligned with and reinforce each other (figure 2). Drawing from a variety of disciplines—business ethics, innovation management, and sociotechnical systems design—we highlight below selected frameworks and methodologies relevant to each of these strategic planks.
Business (Ethics) Strategy
A high-level business strategy for SRA begins with the question, “How can we fuel growth and enable job creation through automation?” To move automation beyond cost and performance efficiencies toward profitable, sustainable business growth with more and better jobs, the SRA approach identifies ways to (1) align a firm’s commercial interests with societal values and (2) make social goals integral to an organization’s core business model.
In a highly cited Harvard Business Review article, -Porter and Kramer (2011) propose the principle of shared value: the idea of creating economic value in a way that also creates value for society. In this view societal needs, not just traditional economic needs, define markets, and the purpose of a corporation is to create shared value, not just profits. Companies that better connect their success with societal improvement open new avenues for innovation, new products, and new customers, all of which expand markets, create differentiation, and drive economic value and growth.
SRA can be thought of as an instantiation of the shared value concept in the context of automation. In this case, the shared value principle would guide firms to ask the following questions:
Our thinking on SRA is also influenced by the “common good” principle in ethics (Velasquez et al. 1992). With roots in the writings of philosophers such as Plato, Aristotle, and Cicero, a contemporary definition of common good comes from the political and moral philosopher John Rawls (1999, p. 233): “maintaining conditions and achieving objectives that are similarly to everyone’s advantage.” While not without its challenges (Velasquez et al. 1992), the common good principle not only provides a framework for technologists to consider the values supported—or compromised—by their choices but also helps them formulate and articulate the rationale for their decisions, which is key for stakeholder transparency (IEEE 2016). For other ethics-based approaches that may be more suitable for specific organizations and situations of automation deployment, we refer the reader to Velasquez and colleagues (2009).
Driven by and closely aligned with a firm’s business strategy are its innovation goals. Sustained job creation, at the heart of SRA, requires innovations of many kinds beyond the commonly recognized forms of product and process innovations. Sawhney and colleagues (2006) identify 12 ways for companies to innovate, with concrete examples of successful innovation strategies that leverage more than one and often several of these dimensions. The 12 categories for innovation are anchored by offerings, customers, processes, and presence (the who, what, how, and where of the business), supplemented by platform, solutions, customer experience, value capture, organization, supply chain, networking, and brand.
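The twelve dimensions lend themselves to a simple profile structure of the kind that underlies an innovation radar; the sketch below is our own illustration, and the scores in it are placeholders, not assessments of any company:

```python
# The 12 innovation dimensions of Sawhney et al. (2006).
# The first four are the "anchors": the what, who, how, and where of the business.
DIMENSIONS = [
    "offerings", "customers", "processes", "presence",          # anchors
    "platform", "solutions", "customer experience", "value capture",
    "organization", "supply chain", "networking", "brand",
]

def innovation_profile(scores):
    """Return a complete 12-dimension profile, defaulting unscored axes to 0.

    A radar chart is then just a polar plot of these 12 values.
    """
    unknown = set(scores) - set(DIMENSIONS)
    if unknown:
        raise ValueError(f"unknown dimensions: {sorted(unknown)}")
    return {d: scores.get(d, 0) for d in DIMENSIONS}

# Hypothetical profile emphasizing product, process, and customer innovation,
# with all scores on an invented 0-5 scale.
profile = innovation_profile({"offerings": 4, "processes": 5, "customers": 3})
```

Companies that innovate along several dimensions at once show broad, multi-lobed radars; a firm innovating only in its offerings would show a single spike.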
Both Marlin Steel and Toyota leverage innovations in product, process, and organization (i.e., changing a firm’s form, function, or activity scope, including employee roles and responsibilities) as part of their growth strategy. We note further Marlin Steel’s successful use of customer innovation (discovering new customer segments) and Toyota’s platform innovation (using common components or building blocks to create derivative offerings), all alongside their core automation efforts. Figure 3 provides an illustrative representation of the multidimensional innovation profile of these two companies using the innovation radar devised by Sawhney and colleagues (2006).
Finally, we note that in this era of digitalization and the fourth industrial revolution, automation has even stronger potential to drive growth by enabling smart products and smart services.
Key to developing and implementing a robust SRA program is a broad systems design perspective. As depicted in figure 4, the “system” scope progressively expands at higher levels of the pyramid, from the physical and software infrastructures to human-technology integrated work environments to the business and social ecosystems supported and impacted by the technology. This calls for suitable systems design philosophies and approaches, two of which we highlight here.
Value-sensitive design (VSD), a concept that originated in computer ethics, “is a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner throughout the design process” (Friedman et al. 2008, p. 70). VSD is an iterative approach that involves identifying stakeholders affected by the technology; understanding their views, preferences, and behaviors through quantitative and qualitative social science methods; and studying how specific technologies in specific contexts support or harm human values.
Another powerful systems design approach for automation is what Autor (2015, p. 23) characterizes as environment reengineering, the process of “radically simplify[ing] the environment in which machines work to enable autonomous operation.” The “design for automation” philosophy is exemplified by Amazon’s retail automation, robotic surgeries, and business process reengineering (Hammer 1990) in the services industry, where workflows and environments are redesigned to optimally leverage the complementary skills of robots and humans.
We highlight here some research challenges broadly categorized under human-technology cooperative work and integrated design tools and environments to support socioeconomically optimal technology choices for automation. The first category concerns problems primarily at the intersection of control theory and cognitive sciences; these include optimal task allocation between humans and automated processes, real-time feedback control and adaptation in a cyber-human shared governance model, fail-safe operation of semiautonomous systems, and adaptive software systems for work automation. A quick scan of relevant literature indicates that many of these problems are beginning to be addressed in various technical communities.
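To make the task-allocation problem concrete, a toy version can be solved by brute force: given cost estimates for each agent (human or machine) on each task, find the assignment that minimizes total cost. The agents, tasks, and costs below are invented for illustration; realistic formulations would add capacity, safety, and skill-development constraints and scale beyond exhaustive search:

```python
from itertools import product

# Hypothetical cost of each agent performing each task (lower is better).
costs = {
    "human": {"inspect": 2, "lift": 9, "pack": 3},
    "robot": {"inspect": 8, "lift": 1, "pack": 4},
}

def best_allocation(costs):
    """Exhaustively search every task-to-agent assignment for the cheapest one."""
    agents = list(costs)
    tasks = list(next(iter(costs.values())))
    best, best_cost = None, float("inf")
    # Each task goes to exactly one agent; agents may take multiple tasks.
    for choice in product(agents, repeat=len(tasks)):
        total = sum(costs[agent][task] for agent, task in zip(choice, tasks))
        if total < best_cost:
            best, best_cost = dict(zip(tasks, choice)), total
    return best, best_cost

allocation, total = best_allocation(costs)
# With these numbers the human inspects and packs while the robot lifts,
# mirroring the complementary-skills division discussed earlier.
```

The real research challenges begin where this sketch ends: costs that change in real time, shared governance of a single task, and fail-safe handoffs when either party falters.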
For integrated design tools, we believe that frameworks such as the Digital Twin that enable modeling, analysis, and evaluation of design choices in the manufacturing domain can be effectively extended to support both the design of human-technology collaborative environments and the evaluation of technology alternatives. However, significant work remains to be done to include rich human behavior modeling, worker performance modeling, and socioeconomic analysis in this framework. To the best of our knowledge, no such integrated paradigms exist outside the manufacturing domain for knowledge work automation.
As Rotman (2017) writes, “The economic anxiety over AI and automation is real and shouldn’t be dismissed. But there is no reversing technological progress.” The key is to implement measures that enable everybody to benefit from these transformative technologies and turn AI and automation into forces for shared prosperity.
In this paper we aim to help technologists and business leaders realize this vision by providing a comprehensive framework that looks beyond today’s prevailing practices and provides a systematic, structured way to frame choices, assign priorities, and design robust strategies. We also discuss the indispensable role of innovation in realizing SRA and, with examples, show that, as with CSR, there is a clear business case for SRA. We hope to inspire and help shape a future where automation and AI work for all.
Atkinson RD. 2017. In defense of robots. National Review LXIX(7), April 17.
Autor DH. 2015. Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives 29(3):3–30.
Brynjolfsson E, McAfee A. 2017. The business of artificial intelligence. Harvard Business Review, July.
Bughin J, Manyika J, Woetzel J. 2017. A Future That Works: Automation, Employment, and Productivity. McKinsey Global Institute.
Carroll AB. 2016. Carroll’s pyramid of CSR: Taking another look. International Journal of Corporate Social Responsibility 1(3).
Edlich A, Sohoni V. 2017. Burned by the Bots: Why Robotic Automation Is Stumbling. New York: McKinsey & Company.
Fishman C. 2013. The Road to Resilience: How Unscientific Innovation Saved Marlin Steel. New York: Fast Company.
Friedman B, Kahn PH, Borning A. 2008. Value sensitive design and information systems. In: The Handbook of Information and Computer Ethics, eds Himma KE, Tavani HT. Hoboken NJ: John Wiley & Sons. pp. 69–101.
Godfrey PC, Hatch NW. 2007. Researching corporate social responsibility: An agenda for the 21st century. Journal of Business Ethics 70(1):87–98.
Hammer M. 1990. Reengineering work: Don’t automate, obliterate. Harvard Business Review, July-August.
IEEE [Institute of Electrical and Electronics Engineers]. 2016. Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems. Piscataway NJ: IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.
IRPAAI [Institute for Robotic Process Automation and Artificial Intelligence]. 2015. Introduction to Robotic Process Automation: A Primer. Online at https://irpaai.com/wp-content/uploads/2015/05/Robotic-Process-Automation-June2015.pdf.
NASEM [National Academies of Sciences, Engineering, and Medicine]. 2017. Information Technology and the US Workforce: Where Are We and Where Do We Go from Here? Washington: National Academies Press.
Oishi MMK, Tilbury D, Tomlin CJ. 2016. Guest editorial special section on human-centered automation. IEEE Transactions on Automation Science and Engineering 13(1):4–6.
Pasinetti LL. 1981. Structural Change and Economic Growth. Cambridge: Cambridge University Press.
Pluess JD. 2015. Good Jobs in the Age of Automation: Challenges and Opportunities for the Private Sector. San Francisco: Business for Social Responsibility.
Porter ME, Kramer MR. 2011. Creating shared value. Harvard Business Review, January-February.
Rawls J. 1999. A Theory of Justice. Oxford: Oxford University Press.
Rothfeder J. 2017. At Toyota, the Automation Is Human-Powered. New York: Fast Company.
Rotman D. 2017. The relentless pace of automation. MIT Technology Review, February 13.
Rutaganda L, Bergstrom R, Jayashekhar A, Jayasinghe D, Ahmed J. 2017. Business models: Avoiding pitfalls and unlocking real business value with RPA. Capco Institute Journal of Financial Transformation 46:104–114.
Sawhney M, Wolcott RC, Arroniz I. 2006. The 12 different ways for companies to innovate. MIT Sloan Management Review, Spring.
Velasquez M, André C, Shanks T, Meyer MJ. 1992. The common good. Issues in Ethics 5(1).
Velasquez M, Moberg D, Meyer MJ, Shanks T, McLean MR, DeCosse D, André C, Hanson KO. 2009. A framework for ethical decision making. Santa Clara: Markkula Center for Applied Ethics.
Wingfield N. 2017. As Amazon pushes forward with robots, workers find new roles. New York Times, September 10.
The radars in figure 3 are not based on a rigorous assessment of the two companies and are to be interpreted as qualitative representations of their innovation strategies.
For an excellent overview of fundamental technologies and advances driving the proliferation of autonomous and intelligent systems, see NASEM (2017) and references therein. Brynjolfsson and McAfee (2017) provide a balanced review of the capabilities and limitations of current technologies.
For example, the “future of work at the human-technology frontier” is one of the National Science Foundation’s 10 Big Ideas (https://www.nsf.gov/news/special_reports/big_ideas/human_tech.jsp).