
Luxembourg National Research Fund

Artificial Intelligence (AI) – in the service of mankind



AI has arrived in our daily lives. What can AI do, and what can it not do? Which societal problems does AI pose? An opinion piece by FNR Secretary General Marc Schiltz.

Artificial Intelligence – a term everyone knows by now. But what is it? In a nutshell: AI is a generation of computer programmes with the ability to imitate intelligent human behaviour.

Example: Google’s computer programme AlphaGo Zero has learned how to master the ancient Chinese board game Go – even defeating the human world champion. The programme learnt how to win by playing against itself thousands of times.

In the domain of medicine, AI can already analyse scans, X-rays and other data – producing diagnoses for certain types of cancer that are more reliable than those of an experienced doctor.

Not to forget virtual assistants on smartphones, the likes of Siri and Alexa, which learn to understand and talk with the user. In the future, they will learn to get to know the user even better – their wishes, habits and preferences. AI has by now reached a point where it can recognise and adapt to emotions.

So are humans, with their imperfect intelligence, running the risk of being dominated by AI? Most likely not anytime soon. While current AI programmes often do a better job than humans, they can only do so in highly specific areas: AlphaGo is a champion in the game Go, but cannot do anything else. The most extraordinary aspect of the human brain is that it can tackle such a wide range of problems.

Human intelligence can also distinguish between cause and effect – it appears AI is not yet able to do this. As Judea Pearl – who championed the probabilistic approach to artificial intelligence – said: “Today’s machine learning programs can’t tell whether a crowing rooster makes the sun rise, or the other way around.”

AI brings with it a range of societal problems. Some jobs will likely not exist in the future, or not in their current form – those of chauffeurs, travel agents and accountants, for example – and some jobs in medicine could also be affected.

There are also ethical challenges. AI is not completely transparent, as these programmes base their decisions on what they have “learned”. Who controls this? How can we prevent it from becoming too one-sided? And in the end, how can AI be prevented from manipulating us? “Fake news” should not be followed by “fake intelligence”.

An intense exchange between science, industry, politics and society is needed in order to develop AI in a way that always puts the well-being of humans first.

This opinion piece was originally published as a ‘Carte Blanche’ in May 2018 (in Luxembourgish).
