In This Issue
Spring Bridge on the US Metals Industry: Looking Forward
March 29, 2024 | Volume 54, Issue 1
In this issue of The Bridge, guest editors Greg Olson and Aziz Asphahani have assembled feature articles that demonstrate how computational materials science and engineering is leading the way in the deployment of metallic materials that meet increasingly advanced design specifications.

Concurrent Design of Materials and Systems

Monday, April 8, 2024

Author: Charles Kuehmann

Materials concurrency has the potential to radically accelerate innovations in materials-intensive manufacturing.

Computational materials design and integrated computational materials engineering (ICME) have revolutionized new materials development, accelerating the introduction of new materials and expanding the scope of the materials engineering enterprise. Materials development timelines have been compressed from decades to as short as a few years, and the fundamental understanding and the systems engineering of materials have introduced robust and manufacturable solutions (NRC 2008). This leap was enabled by the deconstruction of the conventional materials tetrahedron into the systems engineering framework employing a linear process-structure-properties-performance concept (Kuehmann and Olson 2009; Olson 1997), as well as the development of modeling strategies to compute the process-structure and structure-property links the framework provides. Metals have led the way (Olson and Liu 2023) with well-developed examples (Kuehmann and Olson 1995), but polymers, ceramics, and composites are making strides, especially within the Materials Genome Initiative (MGI) (Office of Science and Technology Policy 2011).

In particular, NIST has championed progress across many broad classes of materials, as exemplified by the Center for Hierarchical Materials Design (CHiMaD), a Chicago-based consortium with efforts in thermoelectrics, polymers, and composites (de Pablo et al. 2019). However, computational materials design and development have continued to be relatively isolated endeavors, operating independently of engineering design and impacting new systems only after the materials development process is largely complete.

While accelerated within the computational domain, the process of materials development has largely remained unchanged, progressing through stages of prototyping, scale-up, and qualification before being handed off to the design community for implementation. Traditionally, the barrier to integrating materials development into hardware design and development was the significant mismatch in the time scales of these activities. Hardware systems could be designed and iterated within a few years, and thus they could not wait for the introduction of a new material with advantageous new properties that might take ten to twenty years to reach the market while carrying significant risks (NRC 2008). The 2008 NRC study identified the need for ICME to accelerate the qualification and validation of computationally designed materials, and it also identified the technical gaps and benefits of integrating computational materials models into the traditional engineering design and simulation tools used in product design. However, this by itself is insufficient to enable true concurrent engineering of materials and systems.

Concurrent Engineering and Computational Materials Engineering

The systems engineering embedded in the computational materials engineering approach provides the connection of key materials properties to system-level performance and identifies the processing steps that determine cost and manufacturing complexity. Process-structure and structure-property models add the capability to bring fundamental relationships into the materials trades for candidate materials, allowing them to be included in the overall design strategy and solution. Further leverage is obtained by the inherent capacity of the approach to estimate the impact of process variability and stochastic property responses on the distribution of final properties (Xiong and Olson 2016). Because these distributions impact design choices, and minimum material properties are often controlling in design, the ability to predict them from fundamentals could accelerate the incorporation of new materials into designs, in the same manner that dynamic crash simulations have expedited new automotive structural design. Computational materials design and the validation and qualification framework that ICME provides compose the toolbox that enables concurrent materials and product design, but they are not the only requirements.
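The ability to propagate process variability into property distributions is central to that argument. As a minimal sketch of the idea, the Python snippet below chains a toy process-structure relation and a toy structure-property relation and Monte Carlo samples an assumed quench-rate scatter to estimate a lower-tail "minimum" property; every model form and number in it is invented for illustration and is not an ICME-qualified model.

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_from_process(quench_rate):
    """Toy process-structure link: a hypothetical grain size (um) that
    shrinks as the quench rate (K/s) increases."""
    return 50.0 / np.sqrt(quench_rate)

def strength_from_structure(grain_size_um):
    """Toy structure-property link: a Hall-Petch-like strength (MPa)
    with invented coefficients."""
    return 200.0 + 600.0 / np.sqrt(grain_size_um)

# Assumed process variability: lot-to-lot scatter in quench rate.
quench_rate = rng.normal(loc=40.0, scale=6.0, size=100_000)
quench_rate = np.clip(quench_rate, 5.0, None)

strength = strength_from_structure(structure_from_process(quench_rate))

# Design is usually governed by a minimum property, not the mean, so
# report a lower-tail statistic of the predicted distribution.
print(f"mean predicted strength : {strength.mean():6.1f} MPa")
print(f"1st-percentile strength : {np.percentile(strength, 1):6.1f} MPa")
```

The lower-tail value, rather than the mean, is what a design allowable would be set against, which is why predicting the whole distribution matters.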

The design of any hardware system in the broadest sense is the selection of a set of design variables to satisfy a set of objectives. The objectives are delineated in a set of system requirements by which all proposed design solutions are measured. In most modern complex systems, the design problem is broken down into manageable subsystems, with design requirements established at the subsystem level to allow work to proceed on each without the need for much interaction across the whole. If this occurs within a new organization, such as a startup, teams can be organized around a logical classification driven by the design. The teams are generally small, and communication between them is fluid; thus design trades across teams are possible. Negotiation of requirements from one subsystem impacting another can be achieved, thereby ensuring a significant level of global optimization. In mature organizations, however, this becomes more difficult. Established department structures are not likely optimal for the new design challenge at hand, and thus the division of the problem into subsystems is driven by organizational history rather than the best engineering approach. This leads to suboptimal designs that reflect the organizational structure rather than an innovative solution borne of understanding the fundamentals and exploring creative solutions.
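As a toy numerical illustration of that point, the sketch below compares a joint ("concurrent") optimization of two coupled design variables against a siloed optimization in which one variable is frozen by a legacy subsystem requirement; the objective function, the coupling, and the legacy value are all invented for illustration.

```python
import numpy as np

def system_mass(x, y):
    """Invented system-level objective coupling a 'body' variable x and
    a 'chassis' variable y (arbitrary units); lower is better."""
    return (x - 2.0) ** 2 + (y - 3.0) ** 2 + 2.0 * (x * y - 6.0) ** 2

x_grid = np.linspace(0.0, 5.0, 501)
y_grid = np.linspace(0.0, 5.0, 501)
X, Y = np.meshgrid(x_grid, y_grid)
mass = system_mass(X, Y)

# Concurrent trade: both variables are negotiated together.
i, j = np.unravel_index(np.argmin(mass), mass.shape)
print(f"joint optimum : x={X[i, j]:.2f}, y={Y[i, j]:.2f}, mass={mass[i, j]:.3f}")

# Siloed trade: the interface value y is frozen by a legacy requirement
# (hypothetical y = 1.5) and only x is optimized.
y_legacy = 1.5
mass_x = system_mass(x_grid, y_legacy)
k = int(np.argmin(mass_x))
print(f"siloed optimum: x={x_grid[k]:.2f}, y={y_legacy:.2f}, mass={mass_x[k]:.3f}")
```

The frozen interface forces a visibly worse system optimum, which is the same mechanism, writ small, that inherited subsystem requirements impose on real designs.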

In the automotive sphere, for example, most established companies organize engineering teams around body, chassis, drivetrain, interiors, closures, electronics, and, increasingly, software. Unsurprisingly, automotive factories, or even tier 1 suppliers, are generally organized around similar scopes. Each of these divisions strives to optimize their specific scope based on relatively fixed requirements for their deliverable. As this situation has existed for many decades, the opportunity for significant and fast innovation is small.

[Figure 1]

Thus, in established industries, both the designs and the manufacturing method begin to be dictated by the organizational structure rather than the other way around. In addition, since communication is weakest across these boundaries, the product and the manufacturing system are also weakest across them. This organizational inertia results in design solutions that represent local optima and limits the opportunity for true innovation around a new global solution. The disconnect is further heightened in the sphere of materials engineering, where traditional methods of development are far out of sync with established product development timelines. Product design very rarely incorporates new materials as a design variable used in overall design trades, unless the new material has been fully developed prior to the design exercise. While computational materials design methods have now demonstrated that new materials for specific applications can be developed on timelines equivalent to product development efforts, historical organizational barriers mean few teams and engineers have the skills and experience to integrate new materials into the design landscape of their projects. This greatly limits the design space that can be considered when developing innovative design solutions. In addition, the lack of interaction between materials and product engineers restricts the scope of learning both groups experience. Concurrent engineering of both materials and products not only creates the opportunity to drive better innovative solutions to problems but also creates better engineers to solve the future problems we will encounter.

To successfully execute concurrent materials and systems engineering, we also need a revolution in the building-block approach to systems engineering itself. The legacy of systems engineering, breaking down a design problem into subsystems and components and integrating these into a sequential process of design, implementation, and verification, further slows and limits the ability to execute quickly on new design solutions (figure 1). In this model, a system is deconstructed into its subsystems and components, with requirements decomposed at each level, before being implemented in hardware and software. Then validation is accomplished in reverse across components, subsystems, and finally the full system. This approach delays many failures until late in the process and thus drives significant redesign, implementation, and test rework. Critically, because failure drives important learning, delaying this learning cycle until very late in the process hampers the ability to use feedback early in the concept stage of design, where the most impact can be made. In a concurrent approach, design, build, and test are accomplished in sync with each other, so failures in both manufacturing and performance can be assessed readily and quickly and incorporated into better design solutions. Operational validation can also accelerate the pace of development of the hardware and its capabilities, further providing quick feedback into the overall process. The operational envelope, if initially limited to let the hardware and design mature, can then be expanded as confidence grows and the full design goals are achieved. Operational validation is a gold standard, as validation testing often doesn't fully represent the operational environment, and if validation is pushed late in the process, it risks costly and disruptive failures, as seen in the 737 MAX program (Sgobba 2019) and others.
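A deliberately crude way to see why late verification is expensive: the sketch below compares a sequential build-then-verify schedule with a per-component design/build/test loop, under entirely invented flaw rates and durations. It is a caricature of the two approaches, not a model of any real program.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy program: 20 components, each with a 30% chance of a
# design flaw that only a build/test cycle would reveal.
# Durations are notional months.
n_parts, p_flaw = 20, 0.30
design, build, test, rework = 1.0, 1.0, 0.5, 1.5

def sequential(flaws):
    """V-model caricature: design and build everything, then verify at the
    end; every flaw found there forces rework plus rebuild plus retest."""
    base = n_parts * (design + build + test)
    return base + flaws.sum() * (rework + build + test)

def concurrent(flaws):
    """Design/build/test per component in short cycles: a flaw is caught
    and reworked inside the cycle that introduced it."""
    return (design + build + test + flaws * rework).sum()

flaws = rng.random((10_000, n_parts)) < p_flaw
print(f"sequential schedule (mean): {np.mean([sequential(f) for f in flaws]):6.1f} months")
print(f"concurrent schedule (mean): {np.mean([concurrent(f) for f in flaws]):6.1f} months")
```

Even in this toy form, the cost of a flaw grows with how late it is discovered, which is the core argument for pulling build and test forward into the design loop.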

[Figure 2]

The Algorithm

Concurrent engineering, utilizing computational materials design and a development approach that integrates design, build, and test with early operational validation, is a powerful tool to accelerate development, but it is not sufficient to achieve breakthrough solutions. To create optimal solutions to technical challenges, it is absolutely necessary to avoid our inherent complexity bias. By a factor of 3 to 1, engineers will add complexity to a system when solving a problem, even when the deletion or simplification of components in the system is an equally valid path. As engineers, we want to create, making something that didn't exist previously. It seems antithetical to many engineers that the most important task in engineering may actually be to take something that exists and delete it. To achieve true innovation in design and combat this inherent complexity bias, it's critical to apply a five-step process developed by Elon Musk and known internally at SpaceX and Tesla as the Algorithm.

Step 1 – Question all requirements

The first step in any development program is the listing of requirements for the system. These enumerate the performance characteristics, attributes, regulations, costs, and other factors that must be considered in the design. The process typically tries to identify everything that could possibly constrain the system. This is exactly the drawback of defining too many requirements: it limits the ability to consider a wide range of solutions. Any constraint that isn't a real constraint will impact the design and limit its ability to achieve the most important objectives. And often, requirements that are carried over from previous programs, while sounding prudent, may not be applicable in the context of the current effort. To achieve the smallest set of applicable requirements, each requirement must be:
1) Based on fundamentals, i.e., driven by physics.
2) Defended by a specific person. A constraint imposed by a group, agency, or committee has no one accountable for its applicability.
3) Tied to a single metric for the success of the project. If a requirement has no impact on ultimate success, it must be suspect.
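One way to make those three tests operational is to attach them to each requirement as data. The sketch below is a hypothetical illustration of that idea (the fields, names, and example entries are invented), not an actual requirements-management tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    text: str
    physics_rationale: Optional[str]   # why fundamentals demand it
    owner: Optional[str]               # the specific person who defends it
    success_metric: Optional[str]      # the single project metric it drives

    def is_suspect(self) -> bool:
        """Flag any requirement missing one of the three tests for challenge."""
        return not (self.physics_rationale and self.owner and self.success_metric)

# Hypothetical entries for illustration only.
requirements = [
    Requirement("Peak cabin intrusion < 100 mm in a 40 mph barrier test",
                "crash energy absorption", "j.doe", "occupant safety rating"),
    Requirement("Use the fastener standard from the previous program",
                None, None, None),   # carried over, with no one defending it
]

for r in requirements:
    print(("CHALLENGE: " if r.is_suspect() else "KEEP:      ") + r.text)
```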

Step 2 – Delete the part or process

The first goal of an engineer should be to determine how the system can be successful without the part or process for which they are accountable. Any work to optimize, build, and test a part or process that ultimately is not needed is unnecessary work and leads to unintended negative consequences.

Step 3 – Simplify

If the part or process cannot be deleted, it is important to make it as simple as possible. Remove as many characteristics as possible while still achieving a result that allows the system to be successful.

Step 4 – Accelerate

Primarily for manufacturing and processes, faster is better. A process at high speed inherently requires simplicity and reduces cost. It also allows faster iteration, learning, and feedback into the development process. Slow processes result in a long lag between any changes and the ability to determine the impact on the design.

Step 5 – Automate

As a last resort, automation can be used to solve an engineering problem. Automation can respond to variations in operational parameters and manufacturing conditions to achieve success, but at a significant cost in complexity.

The steps in the Algorithm appear intuitive when viewed in sequence, but many examples exist where engineering solutions are pursued in the opposite sequence with negative results. Automation is applied to control a process that is sensitive to input variables, then sped up to account for high fallout from poor quality and subsequently simplified to make it easier to manufacture, before it is deleted in its entirety when it is discovered the requirements that drove the design of the part aren’t valid. If the Algorithm is always employed in the right sequence, all this unnecessary work is avoided.
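Read as pseudocode, the correct ordering is simply a short-circuiting pipeline: deletion is evaluated before any effort is spent simplifying, accelerating, or automating the part. The sketch below is a schematic illustration with placeholder names and callables, not an internal SpaceX or Tesla tool.

```python
def run_algorithm(part, question_requirements, can_delete, simplify, accelerate, automate):
    """Apply the five steps in order, stopping as soon as the part is deleted.
    The callables stand in for engineering judgment at each step."""
    reqs = question_requirements(part)        # Step 1: question all requirements
    if can_delete(part, reqs):                # Step 2: delete the part or process
        return f"{part}: deleted"
    part = simplify(part, reqs)               # Step 3: simplify what remains
    part = accelerate(part)                   # Step 4: speed up the process
    part = automate(part)                     # Step 5: automate only as a last resort
    return f"{part}: kept"

# Toy usage with trivial placeholder judgments.
print(run_algorithm(
    "bracket-42",
    question_requirements=lambda p: {"legacy load case": "not physics-driven"},
    can_delete=lambda p, r: True,   # the questioned requirement was the only reason it existed
    simplify=lambda p, r: p,
    accelerate=lambda p: p,
    automate=lambda p: p,
))
```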


[Figure 3]

Gigacastings: An Example

Extremely large structural castings (gigacastings) have significantly advanced the manufacturing of automotive structural bodies. These ultra-large castings provide a large section of the body structure in a single component, significantly lowering cost, reducing weight, and reducing the manufacturing complexity of the vehicle structure. While high-pressure die castings have been used in car bodies previously, they had been limited to smaller nodes and backup structures because of their limited size, low ductility in crash events, and poor dimensional stability.

Employing the Algorithm in the development of gigacastings was essential to their success. In step one, requirements were challenged extensively to expand the functional capabilities of large castings. The prevailing limits on the size of casting presses and dies were found to have no basis in fundamental physics, and thus large presses of more than 6,000 tonnes were designed and commissioned. Existing structural aluminum high-pressure die-cast materials required post-casting heat treatment to achieve strength and ductility in structural applications (figure 2) but could not meet dimensional tolerances after heat treating. Computational materials design methods expanded the strength and ductility envelope for non-heat-treated aluminum casting alloys, and creative part design incorporating these new properties allowed castings with up to two-meter flow lengths to be employed in the demanding components of vehicle crash structures (figure 3).
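As a schematic of how such materials trades enter the design loop, the sketch below screens hypothetical non-heat-treated casting alloy candidates against structural targets; every alloy name, property value, and target here is invented for illustration and does not describe any real or Tesla alloy.

```python
# Hypothetical candidate alloys with invented as-cast properties.
candidates = {
    "alloy-A": {"yield_MPa": 110, "elongation_pct": 10.0, "flow_length_m": 1.2},
    "alloy-B": {"yield_MPa": 135, "elongation_pct": 8.5,  "flow_length_m": 2.1},
    "alloy-C": {"yield_MPa": 150, "elongation_pct": 4.0,  "flow_length_m": 2.4},
}

# Invented structural targets for a crash-relevant casting.
targets = {"yield_MPa": 120, "elongation_pct": 8.0, "flow_length_m": 2.0}

for name, props in candidates.items():
    ok = all(props[key] >= minimum for key, minimum in targets.items())
    print(f"{name}: {'meets all targets' if ok else 'rejected'}")
```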

In step two of the Algorithm, the extent of the casting was expanded iteratively by evaluating each part that was connected to the casting and determining if it could be incorporated into the casting with positive impacts on cost and performance while still maintaining manufacturability (figure 4).
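That iterative expansion can be pictured as a greedy consolidation loop over adjacent parts; the sketch below uses invented part names, savings, and a stand-in castability flag purely to illustrate the shape of the decision.

```python
# Hypothetical neighbors of the casting, with invented net savings (cost
# plus performance benefit, arbitrary units) and a stand-in castability flag.
neighbors = [
    {"part": "shock tower bracket", "net_saving": 4.0, "castable": True},
    {"part": "rear rail extension", "net_saving": 7.5, "castable": True},
    {"part": "battery mount",       "net_saving": 2.0, "castable": False},
]

casting = ["rear underbody casting"]
for candidate in sorted(neighbors, key=lambda c: c["net_saving"], reverse=True):
    if candidate["castable"] and candidate["net_saving"] > 0:
        casting.append(candidate["part"])    # absorb the part into the casting

print("consolidated casting includes: " + ", ".join(casting))
```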

In step three, the structure was further simplified by examining all the secondary attachments to the structure for interior components, harnesses, and other items. Attachment points that could be included in the casting were designed in, eliminating a clip, bolt, or other part and further reducing the part count. Quenching of the casting when ejected from the die was removed, further reducing distortion and eliminating a process step. Without a heat treatment to burn off die lube, a cleaning process would typically be needed before structural adhesives are applied or the car body is e-coated. A lube compatible with the structural adhesives and the e-coating process was developed, eliminating a secondary cleaning step. And finally, a large integrated structural casting would be difficult, if not impossible, to repair after a significant crash event. Therefore, specific points in the casting were designed to allow the removal of the portion of the casting that absorbs crash energy, with a replacement service part then inserted to restore the structure. This significantly simplified post-crash repair.

[Figure 4]

In step four of the Algorithm, the casting process was significantly accelerated through the design of advanced cooling technologies within the die. Additive manufacturing allowed significant freedom in the design of the cooling loops in the die, and materials design for new additive and conventional die alloys has expanded their cooling characteristics while improving die life. Advanced but simple thermal backend systems allowed better control of flow and temperature conditions in the die.
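To see how die cooling translates into cycle time, a back-of-the-envelope estimate can use Chvorinov's rule, t = B (V/A)^2, where a more effective cooling circuit acts through a smaller mold constant B. The wall geometry and both constants below are assumed values for illustration only, not measured data for any real die.

```python
def solidification_time(volume_mm3, area_mm2, mold_constant_s_per_mm2):
    """Chvorinov's rule: t = B * (V/A)**2, with the casting modulus V/A in mm."""
    modulus = volume_mm3 / area_mm2
    return mold_constant_s_per_mm2 * modulus ** 2

# A thin-walled section treated as a 3 mm slab cooled from both faces,
# so the modulus is roughly half the wall thickness.
volume, area = 3.0 * 100.0 * 100.0, 2.0 * 100.0 * 100.0   # mm^3, mm^2

# Assumed mold constants (s/mm^2): illustrative placeholders only.
for label, B in [("baseline die cooling    ", 2.0),
                 ("additive conformal loops", 0.8)]:
    print(f"{label}: ~{solidification_time(volume, area, B):.1f} s to solidify")
```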

Step five employed further automation in the process to reduce cycle time and control process variables. Metal delivery to the shot sleeve, application of die spray, and extraction of the part post-casting were all optimized, resulting in a reduction of more than 60% in process cycle time while improving yield.

Conclusions

Advances in computational materials design and their extension to ICME are essential to the concurrent design of systems and materials, opening new opportunities for engineering solutions to many challenging problems facing our society. In conjunction with a product development pathway that merges design with build/test sequences, concurrent engineering of materials and systems can be achieved, accelerating the introduction of innovations. The application of a design and manufacturing method called the Algorithm greatly streamlines the development process, incorporating concurrent materials design and essentially determining not only what can be achieved but also what should be done to meet the ultimate objective. These benefits have been demonstrated in structural applications for metals but aren't inherently limited to them. Entrenched design processes and cultures, and even organizational structures, are significant barriers to achieving the accelerated feedback needed for concurrent engineering to succeed. Much work is still needed to expand the impact of concurrent design of materials and systems across wider sections of industry and materials classes, but the positive impact is clear.

References

Allouis E, Blake R, Gunes-Lasnet S, Jorden T, Madison B, Schroeven-Deceuninck H, Stuttart M, Truss P, Ward K, Ward R, and 1 other. 2013. A facility for the verification & validation of robotics & autonomy for planetary exploration. In: Proceedings of DASIA 2013 - DAta Systems In Aerospace (ESA SP-720). Ouwehand L, ed. Porto, Portugal.

de Pablo JJ, Jackson NE, Webb MA, Chen L-Q, Moore JE, Morgan D, Jacobs R, Pollock T, Schlom DG, Toberer ES, and 15 others. 2019. New frontiers for the materials genome initiative. npj Computational Materials 5: article no. 41.

Kuehmann CJ, Olson GB. 1995. Computer-aided systems design of advanced steels. In: Proc. International Symposium of Phase Transformations During the Thermal/Mechanical Processing of Steel - Honoring Professor Jack Kirkaldy, 345–356. Hawbolt EB, ed. Vancouver, British Columbia: Canadian Inst. of Mining, Metallurgy and Petroleum.

Kuehmann CJ, Olson GB. 2009. Computational materials design and engineering. Materials Science & Technology 25(4):472–478.

NRC. 2008. Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security. Washington, DC: National Academies Press.

Office of Science and Technology Policy. 2011. Materials Genome Initiative for Global Competitiveness. National Science and Technology Council. Washington, DC.

Olson GB. 1997. Computational design of hierarchically structured materials. Science 277(5330):1237–1242.

Olson GB, Liu Z. 2023. Genomic materials design: Calculation of phase dynamics. CALPHAD 82:102950.

Sgobba T. 2019. B-737 MAX and the crash of the regulatory system. Journal of Space Safety Engineering 6(4):299–303.

Stucki J, Pattinson G, Hamill Q, Prabhu A, Palanivel S, Lopez-Garrity O. 2023. Patent No. US-20230046008-A1, published Feb. 16.

Xiong W, Olson GB. 2016. Cybermaterials: Materials by design and accelerated insertion of materials. npj Computational Materials 2: article no. 15009.

About the Author: Charles Kuehmann (NAE) is the VP of materials engineering at SpaceX and Tesla.