How to unlock the AI promise

Artificial intelligence (AI) technologies and their applications continue to grow and evolve. AI technologies are now being deployed across almost every industry and sector, including transportation, healthcare, defence, finance and manufacturing. But what exactly are these technologies? How prevalent are they? And with AI developing so rapidly, how will International Standards respond to these challenges?

As artificial intelligence becomes increasingly ubiquitous across industry sectors, establishing a common terminology for AI and examining its various applications is more important than ever. In the international standardization arena, much work is being undertaken by ISO/IEC’s joint technical committee JTC 1 [1], Information technology, subcommittee SC 42, Artificial intelligence, to establish a precise and workable definition of AI. Through its working group WG 4, SC 42 is looking at various use cases and applications. The Convenor of SC 42/WG 4 is Dr Fumihiro Maruyama, Senior Expert on AI at Fujitsu Laboratories.

Female researcher working in a laboratory.

Currently, the working group is examining a total of 70 use cases. Health, for example, is a fascinating area to explore. Dr Maruyama himself describes one use case in which a program builds a “knowledge graph” from ten billion pieces of information drawn from existing research papers and databases in the medical field. The application then attempts to trace a path representing the likely development from a given gene mutation to the disease that deep learning has predicted from that mutation.
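To make the idea concrete, here is a minimal sketch of that path-finding step over a toy knowledge graph, written with the open-source networkx library. The gene, pathway and disease names are invented placeholders; nothing here is taken from the Fujitsu system, and a real medical graph would hold billions of relations.

```python
# A toy sketch of tracing an explanatory path through a knowledge graph.
# Node names are invented placeholders for illustration only.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("gene_mutation_X", "pathway_activation", relation="activates")
kg.add_edge("pathway_activation", "uncontrolled_cell_growth", relation="causes")
kg.add_edge("uncontrolled_cell_growth", "disease_Y", relation="leads_to")

# Trace a candidate path from the mutation to the disease that a
# deep-learning model has predicted from it.
path = nx.shortest_path(kg, source="gene_mutation_X", target="disease_Y")
print(" -> ".join(path))
```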

Solutions for health

Dr Radouane Oudrhiri is Chief Data Scientist at Eagle Genomics, whose work involves research undertaken “in silico” – that is, using primarily computer- or data-driven innovation. One area of focus is the microbiome, which comprises all of the genetic material of the micro-organisms (bacteria, viruses and fungi) in a given community, such as the human gut, mouth or skin. Microbiomes aren’t limited to humans and other animals: oceans, soils and rivers all host microbiome communities that impact entire ecosystems. Microbiome data is very complex, as it is hyper-dimensional and compositional. Dr Oudrhiri’s colleagues analyse it using AI and machine-learning computational tools that spot associations humans simply cannot. This radically improves productivity and enables revolutionary discoveries: it identifies new, sustainable ingredients and therapeutic targets, and informs safer, more efficient industry practices.

AI technologies have been used to analyse human tumours for some time now but, as Prof. Frank Rudzicz, the Canada representative for SC 42, Director of AI at Surgical Safety Technologies Inc., and Associate Professor of Computer Science at the University of Toronto, points out in an interview for this article, this is just one application of several. For instance, an application has been deployed recently to identify early-onset dementia in elderly patients. Residents at care facilities, normally assessed by a doctor once every six months for 15 minutes, have instead been issued with a computer tablet and asked to respond verbally to a series of questions. The program then alerts the medical team if anything seems awry, such as a change in the patient’s voice patterns, or if they seem unable to spot obvious relationships in an image of a family group.

Dr Oudrhiri has also been working with a company that has developed an AI solution, initially designed to make shoes smarter by collecting biomechanical metrics, measuring aspects such as shoe usage and sporting performance. It works via a chip inserted into the sole. The application has been so successful that advances in technology will soon allow it to be used for the detection of the likelihood of developing diseases – such as Parkinson’s – just by analysing the way in which an individual walks.

The AI of everything

An electrician monitors the telecommunication system at an offshore wind farm.

Health, of course, is not the only field that the work of SC 42 will impact. Dr Maruyama also cites the example of an AI program that uses ultrasonic waves to inspect wind turbines. The program flags up any portions of the turbines that may have defects, clearing the way for human inspection experts to make an informed choice about any subsequent course of action. Crucially, because the program handles the initial inspection, those experts are freed up to inspect more turbines.

Intelligent transportation systems (ITS) is another sector that already relies heavily on AI. Dr Mahmood Hikmet, Head of Research and Development at Ohmio Automotion, a company that focuses on ITS, points to lidar technology, which measures distance to an object using laser light rather than sound or radio waves. If several of these lasers are stacked on top of one another and spun round at speed, the result is a three-dimensional “point cloud” showing how far away a given object might be. All of this takes place at “tens or hundreds of times a second”. It’s an application that can be used in driverless cars, with the capability even to distinguish between different blades of grass.
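To illustrate the point-cloud idea, here is a minimal sketch, with invented angles and ranges, of how each laser return (a range measured at a known azimuth and elevation) becomes one 3-D point; tens of thousands of such points per revolution build up the cloud a driverless car uses.

```python
# A minimal sketch of how stacked, spinning laser rangefinders yield a
# 3-D point cloud. Each return is (range, azimuth, elevation), converted
# to Cartesian coordinates. All values here are invented for illustration.
import math

def to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return to an (x, y, z) point in metres."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return (x, y, z)

# Simulate one revolution of a four-beam sensor.
cloud = [
    to_cartesian(range_m=10.0, azimuth_deg=az, elevation_deg=el)
    for el in (-2.0, -1.0, 0.0, 1.0)   # four stacked beams
    for az in range(0, 360, 2)         # one 360-degree sweep
]
print(len(cloud), "points")  # 720 points in this toy sweep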

Dr Hikmet also highlights crowd-counting AI for driverless cars, a predictive analysis application drawn from data on the infrastructure side of ITS (as opposed to the cars themselves). This involves cameras that monitor people walking back and forth, tracking them throughout an entire shot, whilst predicting their likely “route” as they interact with others. This data is then picked up by the car and used to prevent any possible collisions.
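A hedged sketch of that prediction step: real ITS systems use far richer motion and interaction models, but a constant-velocity extrapolation, shown below with made-up coordinates, conveys how a pedestrian’s likely “route” can be projected from recent camera observations.

```python
# A toy sketch of predictive tracking: extrapolate a pedestrian's next
# position from their recent positions with a constant-velocity model.
# Coordinates are invented; real systems model interactions between people.
def predict_next(track, dt=1.0):
    """track: list of (x, y) positions sampled at equal time intervals."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt   # estimated velocity
    return (x1 + vx * dt, y1 + vy * dt)

pedestrian = [(0.0, 0.0), (0.8, 0.1), (1.6, 0.2)]   # crossing the road
print(predict_next(pedestrian))  # (2.4, 0.3): where the car expects them next
```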

Behaviour training for machine learning

Medical scientists discuss CT brain scan images on a computer screen.

YOLO – You Only Look Once – is object-recognition technology that detects and distinguishes the many disparate objects in a scene in a single pass over the image. It has obvious applications in safety and security contexts. Behavioural cloning is another field of AI, in which a machine learns a series of tasks through reinforcement training. It’s “a way of punishing and rewarding a neural network for doing things right or wrong,” Dr Hikmet explains. The network ends up learning from the reward or punishment signals it receives from the human user as to how it is supposed to “behave”.
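Here is a deliberately simplified sketch of the reward/punishment loop Dr Hikmet describes. Production systems train neural networks with gradient updates; this toy tabular learner, with invented driving actions, only illustrates how +1/-1 feedback shapes behaviour.

```python
# A toy sketch of learning from reward/punishment signals. A human (or
# environment) returns +1 or -1 after each action, and the learner shifts
# its preferences accordingly. Actions and rates are invented.
import random

actions = ["brake", "steer_left", "steer_right"]
preference = {a: 0.0 for a in actions}   # the learner's behaviour "policy"
LEARNING_RATE = 0.1

def choose_action():
    # Pick the currently most-preferred action, breaking ties at random.
    best = max(preference.values())
    return random.choice([a for a, p in preference.items() if p == best])

def give_feedback(action, reward):
    """reward = +1 for doing the right thing, -1 for the wrong thing."""
    preference[action] += LEARNING_RATE * reward

# One training interaction: the learner acts, the supervisor approves.
a = choose_action()
give_feedback(a, reward=+1)
```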

Venture capital is key to certain aspects of Dr Oudrhiri’s work. One exciting area of research seeks to digitize and systematize nothing less than “the entire entrepreneurship process”. By gathering data throughout the venture life cycle, identifying innovation challenges and categorizing information, the platform provides predictive models of a company’s performance, growth potential and valuation. A risk profile is thereby established, assisting in the selection process and the entire start-up evolution. Until now, information of this kind has been collected through human responses to surveys. Such responses are aggregate in nature, do not lend themselves to easily built predictive models, and often lead to unwittingly biased conclusions. After all, it is only natural that company owners will want their projects to succeed.

These examples are as ingenious as they are effective. And yet the vast majority of us are unlikely to have heard of these specific AI technologies, still less to have an awareness of their impact. Current AI solutions are often developed in silos and built for very specialized applications; their true power will be properly realized when they are considered in a holistic framework, such as the horizontal frameworks SC 42 is developing.

A role for standards

For this and other reasons, International Standards are now under development. Dr Oudrhiri suggests that standards are needed to “cut through the hype” so that fears of, and objections to, AI can be either taken on board or rebutted as groundless. Radical ideas for AI applications are often promoted with great fanfare in the media and other public forums – for better or worse – yet, as Dr Maruyama points out, many, if not most, of these ideas never get past the Proof of Concept (PoC) phase.

Consumers do need to be protected – from physical harm, certainly, but also from companies that use the phrase “artificial intelligence” merely to promote a product and spike its share price. And given that AI sits at the intersection of many different fields – software engineering, neuroscience, decision making – it is hugely important that a common framework is developed, so that consumers, producers and regulators can speak a common language.

A collapse in confidence is not as ridiculous or unlikely as it first sounds. Experts talk of “AI winters”, in which previous generations of AI technology peaked, only to fall away because of misplaced experimentation and the consequent withdrawal of funding. The same could happen again and undo much of today’s progress.

State of the practice

Hand holding a smartphone displaying augmented-reality image of a tomato.

It is precisely because AI technologies are developing so quickly that International Standards are so needed. In the words of Dr Oudrhiri, they should focus on the “state of the practice, not the art”. SC 42 has already produced draft technical reports, with standards under development. The subcommittee is working with technical committee ISO/TC 69, Applications of statistical methods, on mapping terminologies and concepts across the machine-learning world, between statistics, software engineering, AI, data science and operational research. An entire working group – SC 42/WG 3 – is looking solely at trustworthiness.

Dr Maruyama believes the best approach to developing International Standards is to converge around a limited number of alternatives and to “focus on where technology is already stable”. A common language and criteria are being created to help projects get beyond the PoC stage. Another area of focus is describing the process and life cycle for developing AI applications. Standards will also help capture the broad requirements of consumer needs, which must include the ethical and societal considerations in use cases and applications. A third area focuses on model validation. This is highly technical and statistical in nature, but will one day ensure that programs and machines do what they are supposed to do.

  1. ISO/IEC JTC 1 is the joint technical committee formed by ISO and its sister organization, the International Electrotechnical Commission (IEC), to serve as a focal point of standardization in information technology.

It’s all about trust

Artificial intelligence (AI) has the potential to aid progress in everything from the medical sphere to saving our planet, yet as the technology becomes ever more complex, questions of trust arise. Increased regulation has helped to rebuild this trust, but grey areas remain. How can we ensure AI is trustworthy without impeding its progress?

Close up view of 52 Facebook notifications on a smart phone.

Using our personal data without authorization to spam us with products to buy is one thing; using it in an attempt to manipulate politics is quite another. This was best demonstrated in the Cambridge Analytica affair, where millions of Facebook profiles of US voters were harvested to build a software system that could target them with personalized political advertising. The dangers were well recognized by the US consumer regulator, which slammed Facebook with a USD 5 billion fine, but trust in how organizations use our data was rattled, to say the least. The scandal also exposed the power, and dangers, of badly used artificial intelligence (AI).

But AI is here to stay. Used well, it can help to improve our lives and solve some of the world’s toughest issues. It enables humans and machines to work collaboratively, with the potential to enhance the capabilities of humans and technology beyond what we can even imagine. For organizations, this can mean increased productivity, reduced costs, improved speed to market and better customer relations, amongst other things. This is reflected in a Forbes Insights survey titled “On Your Marks: Business Leaders Prepare For Arms Race In Artificial Intelligence”, which revealed that 99 % of executives in technical positions said their organizations were going to increase AI spending in the coming year.

The technology is developing at lightning speed, raising as many questions about safety and security as there are benefits it promises to deliver. If the point is to outperform humans on decisions and estimations, such as predicting disease outbreaks or steering trains, how can we be sure we have control?

In AI we trust?

Leading industry experts believe that ensuring trustworthiness from the outset is one of the essential aspects to widespread adoption of this technology. With this in mind, ISO and the International Electrotechnical Commission (IEC) set up joint technical committee ISO/IEC JTC 1, Information technology, subcommittee SC 42, Artificial intelligence, to serve as a focal point for AI standardization. Among its many mandates, the group of experts is investigating different approaches to establish trust in AI systems.

Convenor of the trustworthiness working group within SC 42, Dr David Filip, research fellow at the ADAPT Centre in Trinity College Dublin, a dynamic research institute for digital technology, sums it up: “When software began ‘eating the world’, trustworthiness of software started coming to the forefront. Now that AI is eating the software, it is no big surprise that AI needs to be trustworthy.”

“However,” he analyses, “my impression is that people fear AI for the wrong reasons. They fear doomsday caused by some malicious artificial entity… A far bigger issue, I feel, is that the lack of transparency will allow a deep-learning system to make a decision that should be checked by a human but isn’t.”

Naturally, the level of harm depends on the way in which AI is used. A poorly designed tool that recommends music or restaurants to users will obviously cause less harm than an algorithm that helps to diagnose cancer. There is also the danger of using data to manipulate outcomes, such as in the Cambridge Analytica case.

Threats to trustworthiness

Fully automatic bottling plant in operation.

According to the Organisation for Economic Co-operation and Development (OECD), a collaborative international government body dedicated to furthering economic progress and world trade, malicious use of AI is expected to increase as the technology becomes less expensive and more accessible [1]. Malicious use, personal data leaks and cybersecurity breaches are key threats to the trustworthiness of AI.

A self-driving car involved in an accident, for example, could be hacked and the information relating to liability tampered with. A system that aggregates patient data and uses it to recommend treatments or make diagnoses could suffer errors or bugs that result in disastrous outcomes.

Other risks include the effects of data or algorithmic bias, a phenomenon that occurs when an algorithm produces results that are systematically compromised due to erroneous assumptions in the machine-learning process. When shaped by racist, prejudiced or otherwise subjective behaviour, this can have a profound influence on everything, from what you see in your social media feed to the profiling of criminals in policing systems or the processing of immigration claims.
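One simple, illustrative way such bias can be surfaced is to compare an algorithm’s favourable-outcome rates across groups (a check sometimes called demographic parity). The sketch below uses invented decisions and is not a complete fairness audit.

```python
# A toy demographic-parity check: compare the rate of favourable outcomes
# an algorithm gives to two groups. The decisions below are invented.
def positive_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = favourable outcome
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"Selection-rate gap: {gap:.2f}")   # a large gap is a warning sign
```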

AI systems that require access to personal information also pose risks to privacy. In healthcare, for example, AI has the potential to help advance new treatments by drawing on patient data and medical records. But this creates the possibility that data will be misused. Privacy laws reduce that risk but also limit the technology. It is clear that if AI systems are robust, secure and transparent, the risk of misuse is greatly reduced and their potential can flourish, so we can fully reap the benefits.

What is being done

Woman holding her smartphone and printing on a 3D printer.

The industry is very aware of the need for trustworthiness, and many technologies have been developed, and are steadily evolving, in response. One example is differential privacy, which introduces carefully calibrated randomness into aggregated data in order to reduce the risk of re-identification while preserving the usefulness of the aggregate for analysis. Others include cryptographic tools such as homomorphic encryption and multiparty computation, which allow machine-learning algorithms to analyse data while it remains encrypted, and thus secure. Another is the trusted execution environment, a technology that protects and verifies the execution of legitimate software.
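As a rough sketch of the differential-privacy idea (not a vetted implementation), the snippet below adds Laplace noise, scaled by a privacy budget epsilon, to an aggregate count before it is published; the figures are invented.

```python
# A rough sketch of differential privacy: publish an aggregate count with
# Laplace noise so no single individual's presence can be inferred.
# Epsilon and the count are invented for illustration.
import numpy as np

def noisy_count(true_count, epsilon, sensitivity=1.0):
    """Differentially private count: one person changes the true count by
    at most `sensitivity`; smaller epsilon means stronger privacy."""
    scale = sensitivity / epsilon            # Laplace noise scale
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# 4,231 of a service's users share some attribute; release it privately.
print(round(noisy_count(4231, epsilon=0.5)))
```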

The European Union (EU) formed a High-Level Expert Group on Artificial Intelligence (AI HLEG) to support the implementation of Europe’s strategy on artificial intelligence, which includes ethical, legal and social dimensions. Earlier this year, it published Policy and Investment Recommendations for Trustworthy Artificial Intelligence that set out the group’s vision for a regulatory and financial framework for trustworthy AI.

On an international scale, the Partnership on AI to Benefit People and Society is dedicated to advancing the public understanding of AI and formulating best practices for future technologies. Bringing together diverse global voices, it works to “address such areas as fairness and inclusivity, explanation and transparency, security and privacy, values and ethics, collaboration between people and AI systems, interoperability of systems, and of the trustworthiness, reliability, containment, safety and robustness of the technology”, thus providing support opportunities for AI researchers and other key stakeholders.

“We are a co-founder of the Partnership on AI,” says Olivier Colas, Senior Director International Standards at Microsoft, who also plays an active role in SC 42, “and we’ve forged industry partnerships with both Amazon and Facebook to make AI more accessible to everyone.” He asserts that “as AI systems become more mainstream, we as a society have a shared responsibility to create trusted AI systems and need to work together to reach a consensus about what principles and values should govern AI development and use. The engineering practices that can be codified in International Standards should support these principles and values”. Microsoft, he says, has set up an internal advisory committee to help ensure its products adhere to these principles and takes part in industry-wide discussions on international standardization.

The standards factor

Engineer works a robotic arm from a tablet.

Standards, then, are the key. Dr Filip explains why: “We can never guarantee user trust, but with standardization we can analyse all the aspects of trustworthiness, such as transparency, robustness, resilience, privacy, security and so on, and recommend best practices that make AI systems behave in the intended and beneficial way.”

Standards help build partnerships between industry and policy makers by fostering a common language and solutions that address both regulatory privacy concerns and the technology required to support them, without stifling innovation. Colas believes standards will play an important role in codifying engineering best practice to support how AI is developed and used. They will also complement emerging policies, laws and regulations around AI.

“International Standards have been successfully used to codify risk assessment and risk management for decades. The ISO/IEC 27000 series on information security management is a great example of such an approach for cybersecurity and privacy,” he says. It helps organizations manage the security of their assets, such as financial information, intellectual property, employee details or information entrusted by third parties. “What’s more, AI is a complex technology,” observes Colas. “Standards for AI should provide tools for transparency and a common language; then they can define the risks, with ways to manage them.”

The time is now

Rear view of humanoid robot with screen on torso displaying directions to ice cream.

The ISO/IEC JTC 1/SC 42 work programme outlines several topics for AI, many of which are currently under development in its working group WG 3, Trustworthiness. Projects include a number of documents directly aimed at helping stakeholders in the AI industry build trust into their systems. One example is future technical report ISO/IEC TR 24028, Information technology – Artificial intelligence (AI) – Overview of trustworthiness in artificial intelligence, which analyses the factors that may contribute to the erosion of trust in AI systems and details possible ways of improving it. The document addresses all stakeholders and covers AI vulnerabilities such as security threats, privacy risks, unpredictability, system hardware faults and much more.

SC 42 takes a horizontal approach by working closely with as many people as possible across industry, government and related technical committees, so as to build on what already exists rather than duplicating it. This includes ISO/TC 262, Risk management, whose standard ISO 31000 on risk management serves as a basis for the development of ISO/IEC 23894, Information technology – Artificial intelligence – Risk management. The new guidelines will help organizations better assess typical risks and threats to their AI systems and effectively integrate risk management for AI into their processes.

The standard will be joined by other important technical reports on the assessment of the robustness of neural networks (ISO/IEC TR 24029-1) and on bias in AI systems and AI-aided decision making (ISO/IEC TR 24027). All of these will complement the future ISO/IEC TR 24368, designed to tackle the ethical and societal concerns thrown up by AI (see article To ethicize or not to ethicize…).

Early consideration of trustworthiness in standardization is essential for ensuring the successful role of artificial intelligence in society. “Humans need trust to survive in every sense,” remarks Dr Filip. This includes trust in technology and infrastructure to be safe and reliable. “We rely on our politicians to put laws and systems in place that protect us, and we rely on the good of humans around us to function in everyday society. Now, we need to be able to trust software and digital technology in all its forms. Standards provide us with a way of achieving that.”

  1. OECD, Artificial intelligence in society. Paris: OECD Publishing, 2019

Embracing the power of technology

Just how worried should we be about killer robots? Amidst all the talk about how artificial intelligence (AI) is threatening society, some experts believe AI shouldn’t be feared. Here’s why we can embrace the power of technology.

Artificial intelligence (AI) is everywhere. AI recommends movies and restaurant choices, prevents cars from crashing, books flights, tracks taxis, identifies financial fraud and creates playlists to work out to. In the 1950s, AI was defined as machines operating in ways that were regarded as “intelligent”, or equal to tasks performed by humans. Since then, computer use and data generation have increased enormously, with current estimates of 2.5 quintillion bytes being produced every day.

Hands of a woman using a smartphone.

Much of this data is output, or information, collected from daily use of mobile phones, social media and the Internet. This information is commonly known as “big data” and is where AI steps in to help. AI uses machine learning to analyse this data in real time at a speed and volume no human ever could. Not surprisingly, the private sector has embraced AI and increasingly uses it to gain more accurate information on purchasing behaviour, financial transactions, logistics and predicting future trends.

The United Nations recognizes the power of AI and is working with the private sector on “data philanthropy” so information such as surveys, statistics and consumer profiles can be used for public good. For example, researchers are using satellites and remote sensors with AI technology to predict extreme weather events that affect agriculture and food production in developing countries.

With this in mind, ISO – in conjunction with its sister organization, the International Electrotechnical Commission (IEC) – has identified the need to develop standards for AI that can benefit all societies. The ISO/IEC JTC 1/SC 42 subcommittee for artificial intelligence was established two years ago and has already published three standards relating to big data, with 13 other projects in development. Chaired by business and technology strategist Wael William Diab, it will develop and implement a standardization programme on AI to provide guidance for other ISO committees developing AI applications.

Setting boundaries

SC 42 has a broad scope of AI development that includes basic terminology and definitions, risk management, bias and trustworthiness in AI systems, robustness of neural networks, machine-learning systems and an overview of ethical and societal concerns. Twenty-seven member countries are participating in this programme, with another 13 countries observing. Ray Walshe, Assistant Professor of ICT Standardization at Dublin City University, Mr Wo Chang, Digital Data Advisor for the Information Technology Laboratory (ITL) of the National Institute of Standards and Technology (NIST) in the United States, and Dr Tarek Besold, Scientific Advisor of Neurocat in Berlin and Chief Behavioural Officer (CBO) at Telefonica Innovation Alpha Health in Barcelona, are three key members of this committee. Do they identify with Peter Parker when he became Spider-Man? With great power comes great responsibility.

Industrial robotic arm picks cardboard boxes off a conveyor belt in a warehouse.

Dr Besold isn’t daunted. “AI is a new and fast-changing field, full of innovators and disruptors. We need to define the state-of-the-art and common-sense definitions of AI mechanisms and technologies. Yes, developing norms and standards is a big task and interoperability is vital because AI is so far-reaching. AI is part of many futures as a tool rather than the leader.”

SC 42 is “building from the ground up,” says Chang. “We provide interoperable frameworks and performance tools in the form of standards on AI and big data, which can then be shared with government and private enterprise. These frameworks set the AI ‘boundary conditions’ that can be defined using probabilities to determine the risk factors. Not just boundaries, but a safety net that uses risk management in implementing them.”

It remains up to governments around the world to decide what they regulate. Ray Walshe says that “the public needs to recognize that there is a difference between standardization, legislation and regulation. Ninety percent of the world’s data has been generated in only the past two years. This is an incredible mountain of both structured and unstructured data to be stored, aggregated, searched and correlated for the myriad of businesses, governments and researchers who provide tools and services. Governments and private industry will often use International Standards as a reference to regulation, to ensure that industry, societal safety and ethical concerns are met”.

Tricking AI

Safety of data and how it is used remains a concern in society, especially when the dreaded “computer error” is mentioned. Mathematics emerges as the crucial ingredient. Dr Besold says AI programs play a “numbers game”, with researchers generating attacks and defences on AI systems, trying to “trick them” and developing solutions to the problems they discover.

AI focuses on high specificity, which means that it’s tailored to a specific task, Besold says. “AI takes away the time-consuming and boring programming from people, but it still needs rules and measures that are set by humans. If you apply safety boundaries to the self-driving car, it’s obvious that this technology needs safeguards and standard definitions. Is it an acceptable risk to run over an elderly person or a small child? Neither is acceptable, of course, and we want to help governments and industries accept and use the measures we recommend.”

Photo collage of a low-battery warning in an electric car and an aerial view of a traffic jam.

“Probability in risk assessment is the key word,” Wo Chang agrees, and he uses cats as a rather powerful example: “If you take image recognition, you’ll see that an effective system will highlight an error, and shut down, if the program encounters something it has not experienced before. The system has been given millions of pictures of cats and dogs so that its ability to differentiate between them is fine-tuned. It has been trained under well-defined conditions, but it’s impossible to model for everything. What happens if it comes across a cat wearing a bow tie? It shows that if one part of a picture is changed, the outcomes can be very different. A ‘bug’ (or a bow-tie-wearing cat) that falls outside the trained environment should trigger a safety constraint to avoid failures. In more serious applications, thorough testing can determine the probabilities and shut the system down to prevent catastrophic decisions or failures.”
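A minimal sketch of such a safety constraint, with made-up confidence scores: if the classifier’s best guess falls below a set threshold (the bow-tie-wearing cat it never saw in training), the system declines to act rather than risk a bad decision.

```python
# A toy safety constraint: refuse to act when the model's confidence is
# below a threshold. The threshold and scores are invented placeholders.
CONFIDENCE_THRESHOLD = 0.90

def classify_or_shut_down(probabilities):
    """probabilities: dict of class -> model confidence, summing to 1."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return "SHUT DOWN: input outside trained conditions"
    return label

print(classify_or_shut_down({"cat": 0.97, "dog": 0.03}))   # "cat"
print(classify_or_shut_down({"cat": 0.55, "dog": 0.45}))   # shut down
```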

Trust your data

With use of AI in potentially sensitive areas such as healthcare, surveillance and banking, there remains the risk that human bias affects the data used. Dr Besold acknowledges this. “There is bias in AI, but we can agree on a standard definition to address this bias. Regulators may accept that a 5/10 bias is acceptable for soap dispensers but certainly not when it comes to self-driving cars.”

In the medical field, he says, government and society need to decide whether we are OK with a validated world. Are we OK with using data that’s mostly from the first world, for the first world, in the first world? Do regulators accept that the data can only be applied to these people, or insist that it has to work for everyone in the world but will be statistically less accurate?

“Look at organ transplants. AI could potentially have access to all available medical records across the world and apply an enormous range of measures to determine which person gets to the top of the list, ensuring less rejection of transplanted organs and much better medical outcomes. However, if you are on a transplant list and realize that other people are receiving organs ahead of you, are you willing to accept the data used to make that decision?”

Trustworthiness is vital. The committee and researchers in the field need to look at how other sectors, such as medicine and automotive, apply measures and earn this trust from government and wider society.

Emerging machine learning is starting to look at the more pressing needs of the developing world, according to Wo Chang. “In Africa, access to energy is a big problem in rural areas. With a large uptake of smartphones there, apps are being developed that can diagnose basic medical problems in remote clinics, provide preliminary data such as weather forecasts, soil quality and agricultural tips.”

Fears and phobias

Despite these advances, much of the general public fears AI as a scary development, imagining robots becoming Schwarzenegger-like “terminators” that replace human beings. “This won’t happen in my lifetime,” Ray Walshe says. “Don’t get me wrong, AI is a game changer and is capable of doing very precise jobs very fast. This is impressive and generates huge cost savings, but it’s known as ‘narrow intelligence’. The human brain is capable of doing that ‘narrow’ task but also thousands of other ‘broader’ and more complex tasks.” Robotics is one of the most exciting areas for AI development, but machines capable of “Terminator”-style artificial general intelligence remain a myth and will not appear in the foreseeable future.

Engineer works with a HoLoLens headset to place a virtual robotic arm into the production line.

“AI is still more of a promise than an achieved feat,” agrees Dr Besold. “The research side is progressing faster than the application side. Robotic arms in factories can only do what they are programmed for and there’s no ’intelligence’ in this. If a change is needed, such as working on the other side of the car, it requires a change in programming that involves a human being.”

Dr Besold says that AI developers need to engage more with society to provide transparency, and Chang sees that standards developed by the committee to address system robustness, data quality and boundaries will increase trust and the ability to interact with a variety of data repositories.

All three committee members see jobs changing rather than disappearing. AI will perform more manual work and routine tasks such as standard contracts and documents, giving people more time to concentrate on skills involving empathy, “bedside manner” in medical treatment, ethical matters and lateral thinking. Opportunities for re-education and to work on more challenging and interesting situations will arise.

“How ironic if increased use of AI in workplaces resulted in reviving union movements,” Dr Besold says. “If you’re at a school or hospital, then using AI for logistics or declarative knowledge such as facts, dates and figures may result in less staff time per week. Do governments and employers fire some staff or do they negotiate a shorter work week for a more balanced life? This is where consensus is needed: what’s the biggest benefit to society?”

New horizons

Future trends and benefits for AI will see more hands-free applications according to Wo Chang. “Wearing smart glasses will enable users to look at something like a broken washing machine and get information on what is wrong, where the problem is located and how to fix it. For tourism, you’ll be able to look at a building and find out the history, function and services it still provides while you are standing in front of it.”

Woman with a wearable computer in the form of smart glasses.

Smart glasses aside, Chang has loftier hopes. “When government and businesses keep their citizens and customers at the forefront and learn how to leverage the best of AI and their people, it will be a bright future indeed.”

Ray Walshe has a personal interest in seeing how AI can be used to help in reaching the objectives outlined in the United Nations Sustainable Development Goals, a universal call to action to ensure peace and prosperity for mankind. “How can AI be used to help alleviate poverty worldwide, hunger and malnutrition, for better water and sanitation, equal opportunities in education, work and gender, and to accelerate development in developing nations? These are major challenges that require disruptive and game-changing technologies and expert collaboration on a global scale.”

We need to do more than put cat ears on friends’ social media selfies, Dr Besold says. “My hope for the future is that actual applications of AI will result in more effort being put into logistics that help in the field of medicine, agriculture, climate change and scientific discovery – important applications that will benefit society.”

Seems like the ISO/IEC JTC 1/SC 42 subcommittee for artificial intelligence will be busy.


In the Lab: Alternative Recycling Process for Lithium-ion Batteries: Molten Salt Approach


Johnson Matthey Technol. Rev., 2020, 64, (1), 16

Ruth Carvajal-Ortiz’s current research is centred on innovation in energy storage. She has a special focus on the characterisation of materials, molten salts and their potential applications in several industrial processes, such as metal production and recovery. Currently a research fellow at Coventry University, UK, Ruth is in charge of the molten salts recycling work package of the Custom Automotive Lithium Ion Battery REcycling (CALIBRE) consortium, a circular economy project for automotive lithium-ion batteries (LIBs) funded by Innovate UK and led by Johnson Matthey.

Ruth’s academic and industrial background includes electrochemical characterisation techniques (such as voltammetry), corrosion of metals under hydrothermal conditions, and synthesis and characterisation of catalysts. She obtained her doctoral degree from the University of Manchester, UK, where she designed and tested an in situ molten salt electrochemical oxidation cell to measure hydrogen diffusion in zirconium alloys (1). Prior to her doctoral studies, Ruth worked on corrosion of metals used in nuclear applications, during her MSc and as a research chemist at Trent University in Peterborough, Canada. The main project at Trent University’s supercritical water laboratory was part of generation IV (GEN-IV) nuclear reactor investigations. Its aim was to understand the corrosion behaviour of stainless steel under hydrothermal and supercritical conditions (2–4). The project included collaborations with the Canadian Nuclear Society in Chalk River, Ontario. Ruth’s background also includes synthesis and characterisation of catalysts such as titania, used in biofuels.

Ruth Carvajal-Ortiz

  • Position: Research Fellow in Electrochemical Engineering

  • Department: Institute for Future Transport and Cities (IFTC) Centre for Advanced Low-Carbon Propulsion Systems (C-ALPS)

  • University: Coventry University

  • Address: Puma Way, Coventry

  • Post Code: CV1 2TL

  • Country: UK

  • Email: ruth.carvajalortiz@coventry.ac.uk

About the Research

The demand for LIBs has substantially increased during recent years and is forecast to grow almost 66% globally by 2025 for electric vehicles alone (5). This increases the need for an efficient and sustainable recycling process and circular economy system (6). Coventry University and several battery and automotive companies are working together on a project to provide an achievable and effective way to recycle LIBs. CALIBRE is a new consortium that covers several stages (Figure 1), from ageing and end-of-life (EoL) assessment to chemical or molten salt recycling and materials regeneration. Additionally, the project includes a mechanical separation and material recovery process at pilot scale, reuse and life cycle assessment.

Fig. 1. LIB recycling circular economy project, CALIBRE scheme

A molten salt recycling process is part of the chemical recycling package. This process provides a novel approach that uses common molten salts as electrolytes and reaction media. The main advantage of the molten salts is their performance versatility which, given the multiple battery electrode chemistries presently on the market, provides an improvement over current methods.

For the study, different eutectic mixtures of molten salts (7, 8) (for example sodium, potassium, lithium and calcium borates and chlorides, and sodium and potassium carbonates) are tested to provide an optimised alternative or a shortcut to the hydrometallurgical, pyrometallurgical or even biometallurgical recovery of metals (such as cobalt, nickel and manganese). This approach takes advantage of the salts’ electrochemical and solubility properties. Initially, a two-phase molten salt system composed of sodium borate and sodium chloride was employed to evaluate the feasibility of recovering metals by electrodeposition from mixed feeds of cobalt, manganese, copper and nickel oxides and from virgin cathode materials (for example nickel manganese cobalt (NMC) 111). The process operates within a temperature range of 800–900°C, where both salts are in the liquid state. Amietszajew et al. reported 98–99% metal purity for single metal oxides deposited using the process described (Figure 2) (9, 10).
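For readers wanting a feel for the numbers, a back-of-the-envelope estimate of electrodeposition yield can be made with Faraday’s law, mass = M·I·t/(n·F); the current, duration and efficiency in the sketch below are invented and are not taken from the CALIBRE study.

```python
# A back-of-the-envelope Faraday's-law estimate of electrodeposited mass.
# Operating values are invented for illustration, not from the paper.
FARADAY = 96485.0   # C/mol, charge per mole of electrons

def deposited_mass_g(current_a, time_s, molar_mass_g_mol, n_electrons,
                     current_efficiency=1.0):
    """Theoretical mass of metal plated onto the cathode, in grams."""
    charge = current_a * time_s * current_efficiency   # coulombs delivered
    return charge * molar_mass_g_mol / (n_electrons * FARADAY)

# Example: recovering cobalt (Co2+ + 2e- -> Co, M = 58.93 g/mol)
# at 2 A for one hour at an assumed 95 % current efficiency.
print(f"{deposited_mass_g(2.0, 3600, 58.93, 2, 0.95):.2f} g of Co")  # ~2.09 g
```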

The system has demonstrated stability and could be used together with other metal recycling sources and processes. Additional insights into the environmental impact of the pilot-scale process, such as its carbon footprint and efficiency, are also being assessed. The new method might solve some of the issues associated with the hydrometallurgical methods currently used by the recycling industry, including significant water waste, sulfate byproducts and toxic acids that are detrimental to the environment. Furthermore, the method accommodates a wide range of metals, which is of high importance given the growing complexity of the battery waste stream and the need for the world to recycle essential materials while reducing pollutants and greenhouse gas emissions.

Fig. 2. Molten salt electrochemical cell scheme showing cathode and anode reactions during the process of metal deposition. Working electrode (WE) = half-cell where reaction occurs = cathode. Reference electrode (RE) = evolution of chlorine (oxidation) = anode


New International Standard for verification bodies just published

The validation or verification of information declared in claims is a key way of demonstrating that what is said is reliable and true. But only if those performing this confirmation are doing it correctly. A newly published ISO and IEC standard will ensure that validators and verifiers are competent, so everyone can have confidence in the claims.

ISO/IEC 17029, Conformity assessment – General principles and requirements for validation and verification bodies, contains general principles and requirements for the impartial, competent and consistent provision of validation and verification activities by the assessment bodies performing them.

The newly published International Standard is useful for organizations in any sector, providing assurance that claims are either plausible when it comes to the intended use (validation) or correctly stated (verification). It is designed to be applied in conjunction with existing sector-specific schemes.

As a framework for validation and verification activities, it provides the general requirements to which new sector-specific standards can refer, such as the upcoming ISO 14065, Environmental information – Requirements for bodies validating and verifying environmental information, due to be published in 2020. The two standards, therefore, will go hand in hand.

ISO/IEC 17029 is expected to form the basis of many more sector application standards across a range of industries that will benefit from its general requirements.

Dr Stefanie Vehring, Convenor of the working group that developed the standard, said validation and verification according to ISO/IEC 17029 are assessments that apply to declared information such as claims or declarations.

“ISO/IEC 17029 complements the established conformity assessment tools by fitting in between inspection and certification,” she said.

“It provides a conformity assessment approach where the information itself serves as the object of assessment and confirmation of this declared information is sought.”

ISO/IEC 17029 is the latest in a series of standards designed for the assessment and recognition of those performing conformity assessment activities and was developed by ISO’s Committee on conformity assessment (CASCO). Many of these are published jointly by ISO and its standardization partner, the International Electrotechnical Commission (IEC).

Together, these standards make up the CASCO Toolbox. Developed with input from stakeholders all over the world, the toolbox includes the contribution of the International Accreditation Forum (IAF) and the International Laboratory Accreditation Cooperation (ILAC), two key ISO partners.

ISO/IEC 17029 can be purchased from your national ISO member or through the ISO Store.

CASCO is the ISO committee that develops policy and publishes standards related to conformity assessment.