Progress in the field of Artificial Intelligence is opening up new opportunities for policymakers and administrators. Which measures must political leaders now implement in order to maximise the positive effects and minimise the risks? Who will regulate the codes and algorithms that will hold sway over our lives in the future? Who will assume responsibility? At the invitation of the Austrian Federal Ministry for Education, Science and Research (BMBWF) and the Austrian Science Fund FWF, international experts came together to discuss the latest developments and consequences at this year’s Alpbach Technology Symposium.
The experts soon found common ground, citing an urgent need for collective reflection on artificial intelligence (AI) and possible approaches to regulating it. After all, AI, more than almost any other technical advancement, will entail far-reaching changes across all areas of society.
Weighing the opportunities and risks of this development was the central theme of the debate. Like electricity or the Internet, AI technology can be used in virtually any area of life, which makes it all the more impactful. The discussion focused in particular on the trend toward delegating human decision-making to algorithms. Social scientist Jack Stilgoe from University College London advocated refraining from relying exclusively on algorithms, as this would entail the long-term risk of undervaluing human capacities and relegating them to the margins. Artificial intelligence was not developed with the aim of enabling independent decision-making, but rather to lay the groundwork for decisions, added Sepp Hochreiter, head of the AI Lab at Johannes Kepler University Linz. In his view, humans must always remain the final decision-making authority. Tim O'Brien, General Manager AI Programs at Microsoft, added that algorithms are basically just mathematics; the real challenge lies in improving the quality of the underlying data and subsequently developing principles governing the responsible use of new AI technologies.
The experts also touched on AI applications that are already in extensive use, such as automatic facial recognition. Meredith Broussard, professor of data journalism at New York University, criticised what she termed the prevailing “techno-chauvinism”: the idea that new technologies are the solution to all our problems. She addressed in particular the unconscious assumptions that AI developers make when designing algorithms. More diversity within these teams is urgently needed to reduce discriminatory biases in algorithms such as those found in facial recognition. Going forward, policymakers must devote much more attention to preventing discrimination. Picking up on that thread, philosopher Paula Boddington of Cardiff University emphasised that we as a society must always question the extent to which artificial intelligence promotes, or possibly restricts, people in their independent actions.
Patrice Chazerand, from the European digital industry’s trade association DIGITALEUROPE, highlighted the digital economy’s efforts towards self-regulation and advocated the establishment of ethical guidelines. He called for a legal framework that guarantees human rights, self-determination and data protection in artificial intelligence, a call that garnered support from other participants as well.