Chris Pickering | The Engineer 4 Sep 2023
Are two minds better than one? Chris Pickering learns how Cambridgeshire firm Secondmind is applying AI to powertrain calibration
Engineering is the fusion of science and human creativity. It thrives on new ideas. But how much of an engineer’s time is actually spent creating or refining those ideas?
Many hours can be lost in mundane and repetitive tasks such as scheduling test programmes and analysing data. In some cases, there simply isn’t enough time to go through this process for all the design concepts, and potentially promising ideas have to be abandoned.
Perhaps more than any other sector, the automotive industry is feeling these pressures at the moment. The incoming – and as yet unfinalised – Euro 7 regulations look set to add yet more complexity to ICE development at a time when most manufacturers are already diverting huge amounts of resources into electrification, itself a formidably complex undertaking.
Artificial intelligence (AI) may hold the key to unpacking the vast data sets generated by these complex systems. It can speed up everyday tasks, automate processes and provide insight that might otherwise remain hidden.
“AI is about efficiency,” comments Gary Brotman, CEO of machine learning specialist Secondmind. “You’re trying to take repetitive tasks, automate them, and take that burden away from a human expert.”
The Cambridge-based company is part of the rich seam of computer science talent that has sprung up around the city’s famous university. It was this same leafy part of England, for instance, that gave us the natural language processing behind Amazon’s Alexa voice assistant.
Secondmind focuses on AI for automotive design and development. Its customers already include the Mazda Motor Corporation in Japan, where Secondmind’s optimisation engine is being used to accelerate the process of ECU calibration for hybrid and electric vehicles.
“We certainly don’t sit in the doomsday world,” says Brotman. “Our aim is to be the ‘second mind’ of the engineer, providing them with tools to make them better at what they do and to shift their expertise to new challenges. I’ve yet to see any case where new algorithms are going to replace 10 or 20 years of knowledge and experience.”
A modern combustion engine ECU can have well over 50,000 parameters, many of which interact with one another. Calibrating each of them by hand is simply impractical, no matter how many engineers you throw at the problem.
There are numerous practicalities to consider. For example, if you’re looking at cold-start behaviour in a test vehicle, you only have a matter of moments before the engine warms up, after which it will need to be left for several hours to cool down again. Physical prototypes are also expensive to build and transport, and once you’ve shipped them to, say, a cold-climate test there’s no guarantee the weather conditions will actually play ball.
Increasingly, vehicle manufacturers are embracing model-based calibration. Here, the powertrain and the subsystems within it are represented by a mathematical model, which can be tested in place of a physical prototype. As a result, there’s no vehicle preparation or logistics to consider, tests can be simulated far faster than real time, and numerous duplicates of the same model can be run in parallel.
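In code, that workflow might look something like the sketch below: a stand-in plant model is evaluated over a batch of candidate calibrations concurrently, with no test vehicle involved. The model formula, parameter names and values here are invented purely for illustration; they are not Secondmind’s or any manufacturer’s.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical plant model: predicted NOx for one candidate calibration.
# In practice this would be a physics-based or data-driven powertrain
# model; the formula below is purely illustrative.
def plant_model(calibration):
    injection_timing, rail_pressure = calibration
    return (0.8 * abs(injection_timing - 4.0)
            + 0.002 * abs(rail_pressure - 1600))

# A small batch of candidate calibrations to evaluate.
candidates = [(t, p) for t in (2.0, 4.0, 6.0) for p in (1400, 1600, 1800)]

# Because the "test vehicle" is just software, duplicates of the same
# model can be evaluated concurrently, with no prototype logistics.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(plant_model, candidates))

# Pick the candidate with the lowest predicted NOx.
best = min(zip(results, candidates))
```

The point of the pattern is the parallelism: each worker gets its own copy of the model, so the batch completes in a fraction of the time a physical test programme would take.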
But even with an automated approach, the parameter space is simply too vast to run through every single combination. This is where AI comes in. The technology can do incredible things with large, complex data sets, but it also enables engineers to work smarter, reducing the amount of testing and data collection involved.
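One way to see the “work smarter” point is to compare an exhaustive sweep with an adaptive search on a toy calibration problem. The efficiency model and parameter ranges below are hypothetical, and the coordinate-wise sweep is a deliberately simple stand-in for the machine-learning-driven optimisation Secondmind actually uses; the contrast in model-run counts is what matters.

```python
import itertools

# Toy "engine model": brake efficiency as a function of two interacting
# calibration parameters (hypothetical spark advance and EGR fraction).
def efficiency(spark, egr):
    return (38.0
            - (spark - 22.0) ** 2 / 50.0
            - 400.0 * (egr - 0.12) ** 2
            - 0.5 * (spark - 22.0) * (egr - 0.12))   # interaction term

SPARK_GRID = [10 + 0.5 * i for i in range(41)]   # 10..30 deg BTDC
EGR_GRID = [0.01 * i for i in range(31)]         # 0..0.30

def exhaustive_search():
    """Brute force: evaluate every combination (41 x 31 = 1,271 runs)."""
    best = max(itertools.product(SPARK_GRID, EGR_GRID),
               key=lambda p: efficiency(*p))
    return best, len(SPARK_GRID) * len(EGR_GRID)

def coordinate_search(spark=10.0, egr=0.0, sweeps=3):
    """Adaptive alternative: sweep one parameter at a time while holding
    the other fixed, then repeat. Far fewer model runs than the grid."""
    runs = 0
    for _ in range(sweeps):
        spark = max(SPARK_GRID, key=lambda s: efficiency(s, egr))
        runs += len(SPARK_GRID)
        egr = max(EGR_GRID, key=lambda e: efficiency(spark, e))
        runs += len(EGR_GRID)
    return (spark, egr), runs
```

On this toy problem the adaptive search lands on the same optimum as the full grid while making roughly a sixth of the model runs (216 versus 1,271) – the same kind of economy, in miniature, as solving a calibration problem with a fraction of the usual data.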
“We think of big data – more sensors and more data points – as better for decision making if, say, we’re driving down the road in an autonomous vehicle, but data is also the foe in making design and development processes more efficient,” comments Brotman. “Less data is better for efficiency, whether you’re looking at the early-stage design of the vehicle and the systems within it or hardware-in-the-loop calibration. We use just 20 per cent of the data that would typically be required to solve the same problems.”