When discussing the ethical handling of artificial intelligence, it is essential to consider both the intent (what the AI is used for) and the realization and implementation of the technology (how the AI is developed). The type of AI used, whether deep learning neural networks or simple rule-based logic, matters less than the use to which the AI is put.
Regulation should focus on defining acceptable uses, such as developing better vaccines, and unacceptable uses, such as deploying AI-generated media to subtly manipulate humans.
The EU's upcoming AI Act will specify in detail which applications are not permitted and which are considered particularly risky. Different risk classes will be defined, each with its own requirements. AI applications that contradict the EU's ethical principles and pose an unacceptable risk are to be banned completely.
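To illustrate the risk-based approach, the sketch below encodes the four risk tiers commonly described for the draft AI Act (unacceptable, high, limited, minimal) as a simple enum. The one-line requirement summaries are loose paraphrases for illustration, not the regulation's wording.

```python
# Illustrative sketch, not the legal text: the AI Act's tiered,
# risk-based logic. The requirement strings are simplified summaries.
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "banned outright (e.g. social scoring)"
    HIGH = "strict requirements: risk management, documentation, human oversight"
    LIMITED = "transparency obligations (e.g. disclose interaction with AI)"
    MINIMAL = "no additional obligations"

def allowed(risk: RiskClass) -> bool:
    """Applications in the unacceptable tier may not be deployed at all."""
    return risk is not RiskClass.UNACCEPTABLE

print(allowed(RiskClass.HIGH))          # True, but subject to strict requirements
print(allowed(RiskClass.UNACCEPTABLE))  # False
```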
When developing and implementing AI solutions, we should first analyze which ethical and moral aspects need to be taken into account. This includes, for example, the question of how we can deal with potential data bias and ensure equal opportunities. It also includes identifying and accounting for potential pitfalls such as a lack of diverse perspectives.
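To make the data-bias question more tangible, here is a minimal sketch of one common pre-training check: comparing selection rates across groups in the data. The dataset, column names, and the 80% threshold (the "four-fifths rule" heuristic) are illustrative assumptions, not a prescription.

```python
# Minimal sketch of a data-bias check on a hypothetical hiring dataset:
# compare selection rates across demographic groups before training on the data.
import csv
from collections import defaultdict

def selection_rates(path, group_col="gender", outcome_col="hired"):
    totals = defaultdict(int)
    positives = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            group = row[group_col]
            totals[group] += 1
            positives[group] += int(row[outcome_col] == "1")
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates("applicants.csv")  # hypothetical file
# The "four-fifths rule" is one common heuristic: flag the data if any
# group's selection rate falls below 80% of the highest group's rate.
if rates and min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential data bias, selection rates differ across groups:", rates)
```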
While the type of AI matters less for the "what", it certainly plays a role in the "how". Measures for creating transparency are essential so that people understand what to expect from the AI they are dealing with. This is particularly significant in high-risk areas such as law enforcement or healthcare, as the AI Act also makes clear.
We cannot separate the discussions about intention and development, i.e. about the "what for" and the "how" of AI. This is shown especially by the latest developments in the field of general-purpose AI (GPAI).
GPAI refers to an AI system capable of performing general-purpose functions such as image and speech recognition, audio and video generation, pattern recognition, question answering, and translation.
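To illustrate the "general purpose" point, the sketch below exercises several of these functions through the Hugging Face transformers pipeline API, a real interface; the input strings and the local image file are hypothetical placeholders, and the default models downloaded for each task are an implementation detail, not a recommendation.

```python
# Illustrative sketch: one general-purpose toolkit, several different tasks.
from transformers import pipeline

# Question answering: extract an answer from a given context.
qa = pipeline("question-answering")
print(qa(question="What does GPAI stand for?",
         context="GPAI stands for general-purpose AI."))

# Translation: English to German.
translate = pipeline("translation_en_to_de")
print(translate("Artificial intelligence has enormous potential."))

# Image recognition: classify the contents of a picture.
classify = pipeline("image-classification")
print(classify("factory_floor.jpg"))  # hypothetical local image file
```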
A GPAI system can therefore serve multiple purposes, both intended and unintended. In particular, we have recently seen many applications built on generative AI tools such as ChatGPT or Midjourney. This makes transparency about how these models work and how they are used all the more important.
On the one hand, this means transparency about the foundation models themselves: developers and regulators need it to adequately assess the uses and risks of AI. On the other hand, it means transparency about how the models are used, which users need in order to understand what risks they are taking and how to assess the results.
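One widely used instrument for the first kind of transparency is a "model card" published alongside a model. The following is a minimal, hypothetical sketch of the kind of fields such documentation might carry; the field names and example values are assumptions, not any particular standard.

```python
# Minimal, hypothetical model-card sketch: a structured record that a
# model's developers could publish so regulators and users can assess
# intended use and risks. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]   # uses the developer advises against
    training_data_summary: str     # provenance and known gaps
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="example-gpai-model",
    intended_uses=["drafting text", "translation"],
    out_of_scope_uses=["covert manipulation of humans"],
    training_data_summary="Public web text; demographic coverage unaudited.",
    known_limitations=["may reproduce biases present in training data"],
)
print(card)
```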
Artificial intelligence has enormous potential across economic sectors, ranging from the development of new drugs to AI-designed enzymes for breaking down plastic waste to autonomous hub-to-hub trucking. Generative AI - AI capable of generating text or images on demand - has expanded the impact of AI into new areas such as creativity.
While we are still discovering the potential of Industrial AI, there are already concrete uses for generative AI, such as increasing productivity through assistive AI and enabling hyper-personalized interactions.
In manufacturing in particular, it is easy to list potential uses of AI, for example in smart factories: it can improve operational efficiency and optimize supply chain operations such as warehousing and staffing.
However, given the speed at which new technologies and paradigms are currently being introduced, it is extremely difficult to put a concrete overall value on AI for the economy or for the manufacturing industry.
We are currently experiencing a tremendous acceleration of the AI transformation, driven by the introduction of new AI technologies. This means that right now we need to pay special attention to identifying and mitigating risks. There are three levels in particular to consider.
However, the necessary regulation must not stifle innovation. Regulations should provide guardrails for what may and may not be done, in order to prevent risks and strengthen confidence in the AI solutions being developed. At the same time, they should outline a framework within which AI innovation can flourish. We therefore need appropriate, clear regulation of artificial intelligence that both provides security and leaves room for innovation.
Saara Hyvönen is one of the three co-founders of DAIN Studios, a leading AI consultancy with offices in four European countries. She holds a PhD in mathematics and has extensive experience applying data science in both academic and business settings, for example as a postdoctoral researcher in data science at the University of Helsinki and as head of global CRM analytics at Nokia.
Her specialty is data and AI strategy development, identifying optimal data use cases, and defining related data, architecture, and compliance requirements. She seeks answers to the full range of what, why, and how questions. In 2021, she was named among the 100 Brilliant Women in AI Ethics.
"I love making data work!"