
Artificial Intelligence and Data Regulations

Two terms that quickly come up in any discussion about fintech are big data and artificial intelligence. And with good reason.
Due to recent advances in data storage, collection and processing, combined with new applications of AI algorithms, technologies such as algorithmic trading, chatbots, voice recognition and automatic translation have gone from science fiction to an app in your pocket. While these technological advances have opened up a sea of opportunities, they also bring with them some challenges. One of the biggest challenges faced by many firms in the financial sector is the growing threat from international tech giants like Google, Facebook and Alibaba, which possess both truly big data and superior capabilities for analysing them, and which have set their sights on markets in the European financial sector (Angelshaug, Knudsen and Saebi, 2019).
One popular narrative concerning this threat is that stricter European data protection laws create a disadvantage for European firms vis-à-vis the international tech giants. The reason is that European companies face more restrictions on which data they can use and what they can use them for, which ultimately hampers their ability to train AI algorithms and develop AI-driven business models that stand up to the competition. While this narrative seems intuitively plausible, an alternative, more hopeful story can be told: stricter European data protection laws may actually prove to be an advantage for European firms. The logic here is that stricter European data regulations will force European companies to develop capabilities, algorithms and business models that differ from those developed by technology giants operating mainly in markets with looser data regulations. And if the core capabilities, algorithms and business models needed to compete effectively in Europe are distinctly different from those needed in other markets, entry barriers rise. In essence, this buys European companies time to build advantages that are harder for international newcomers to overcome.
The purpose of this paper is to examine this alternative narrative more closely, and to discuss whether, and to what degree, the stricter European data regulations might actually benefit European companies in their anticipated battle with the American and Chinese tech giants. To address this issue, we first give a brief introduction to data theory, before providing a general introduction to AI and its history. We then give an overview of some key aspects of European data protection laws and the restrictions they may place on AI-powered services and business models, followed by an argument for why we believe the tighter EU data regulations might be good news for European companies.

BIG DATA AND AI
WHAT ARE DATA?

Data refers to facts and statistics collected for reference or analysis (Oxford English Dictionary, 2019). According to classic information theory, data are the raw material used to generate information and knowledge (Rowley, 2007). Individual data points begin to generate meaningful insights - information - when they are combined in ways that make sense semantically. For example, the four words hot, cat, dog, eats tell us next to nothing, but if we use them to form the phrase "cat eats hot dog", the data points are combined in a meaningful way and we have generated information. Furthermore, when information is logically connected to other chunks of information (e.g. other phrases and sentences), we are approaching knowledge, which can provide valuable input for decisions.
In the early phases of digitalisation, data did not play a dominant role in many business models, for the simple reason that there were technical limitations on computing power, data storage and data processing.
In the late 1990s, the technical limitations associated with data slowly started to evaporate. Improved tools for collecting, structuring, storing, retrieving and analysing data opened up a multitude of new applications, fuelling the rise of Web 2.0 (O'Reilly, 2007) and the advent of e-business and e-commerce (Chaffey, 2007). However, the increasing volume, velocity and variety of data that had started to become available also created a set of new challenges for firms.
One challenge in particular was that a large share of potentially useful data remained of limited use. The reason was that much of the "new" types of data that were being produced were unstructured, meaning that existing data processing paradigms and corresponding tools were unable to exploit them to generate information and knowledge (Beath et al., 2012; Feldman and Sanger, 2007). Accumulation and processing of unstructured data was contingent on further technological advancements in computational power and analytical tools, such as big data analysis techniques and what we today refer to as deep learning AI models, which are currently considered the most promising machine learning approach (Brynjolfsson and McAfee, 2012; McAfee and Brynjolfsson, 2013; Witten et al., 2016).
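To make the structured/unstructured distinction concrete, consider a minimal sketch (our illustration, with invented example sentences, not taken from the article) of why unstructured data needs extra processing: free text carries no predefined fields, so structure must be imposed on it before it can generate information, here via a simple bag-of-words representation.

```python
from collections import Counter

# Unstructured data: free-text customer feedback with no predefined fields.
documents = [
    "Customer praised the fast mobile payment service",
    "Customer complained that the payment app was slow",
]

def bag_of_words(text):
    """Impose structure on unstructured text: a term-frequency map."""
    tokens = text.lower().split()
    return Counter(tokens)

# Once structured, the data can be queried and aggregated.
vectors = [bag_of_words(doc) for doc in documents]
mentions = sum(v["payment"] for v in vectors)
print(mentions)  # both documents mention "payment", so this prints 2
```

Real unstructured-data pipelines are of course far more sophisticated (tokenisation, embeddings, deep learning models), but the principle is the same: the raw text only becomes analysable once a structured representation has been derived from it.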

A HISTORY OF AI
While many think of AI as a fairly recent phenomenon, its origins date back to the mid-20th century. Originally, AI was thought to be a straightforward application of computing power, a software engineering project like any other. Famously, the MIT professors Seymour Papert and Marvin Lee Minsky assigned image recognition as a summer programming project for undergraduates in 1966 (Papert, 1966).
Once researchers came to understand that AI was a hard problem, they struggled to make progress. While the early years of AI research (1956-1966) were characterised by rapid progress and high levels of enthusiasm, developments experienced a bumpier ride over the next 30 years. Researchers experienced regular setbacks, which led to so-called AI winters, longer periods when funding and interest in AI research suffered considerable blows.

1950S-1990: SYMBOLIC AI
For much of the period from 1956 to 1993, the leading paradigm in AI research was known as symbolic AI. The idea was first to reduce human cognition to the manipulation of symbols, and then to implement that symbolic manipulation on a computer. The inspiration was formal logic, which had captured a portion of conscious reasoning all the way back to Euclid and Aristotle. The appeal of symbolic AI is that conscious reasoning can be explained: the computer could not only make a decision, but also explain how it made that decision. Symbolic AI also had intellectual appeal because of its connections with mathematics. The approach was so dominant that it was nicknamed GOFAI, for Good Old-Fashioned AI (Luger, 2005; Walmsley, 2012).
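A minimal sketch can illustrate the symbolic idea (our own toy example, with invented facts and rules, not from the article): explicit if-then rules over symbols, where the system can report which rule fired and thereby explain its decision.

```python
# Rules are explicit symbol manipulations: (required facts, conclusion).
RULES = [
    ({"has_fur", "says_meow"}, "cat"),
    ({"has_fur", "says_woof"}, "dog"),
]

def classify(facts):
    """Return (conclusion, explanation) for the first matching rule."""
    for conditions, conclusion in RULES:
        if conditions <= facts:  # all required facts are present
            explanation = f"because {sorted(conditions)} are all present"
            return conclusion, explanation
    return None, "no rule matched"

label, why = classify({"has_fur", "says_meow", "is_small"})
print(label, "-", why)  # prints: cat - because ['has_fur', 'says_meow'] are all present
```

The point of the sketch is the explanation string: unlike a trained neural network, a symbolic system's decision path can be read directly from the rules it applied.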
A simple symbolic approach to automatic translation would be to look up each word in a bilingual dictionary, but as anyone who has ever learned a foreign language knows, the correct translation can depend on context. One source of context is grammatical - a noun i
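The word-by-word lookup just described can be sketched as follows (our illustration; the mini-dictionary is invented for the example). It also shows exactly where the approach breaks down: the lookup is blind to context, so the compound noun "hot dog" is rendered literally.

```python
# A toy English-to-Norwegian bilingual dictionary (invented for illustration).
EN_TO_NO = {
    "the": "den",
    "cat": "katt",
    "eats": "spiser",
    "hot": "varm",
    "dog": "hund",
}

def translate(sentence):
    """Naive symbolic translation: word-by-word dictionary lookup,
    leaving unknown words unchanged and ignoring all context."""
    return " ".join(EN_TO_NO.get(w, w) for w in sentence.lower().split())

# "hot dog" is a compound noun, but the context-free lookup
# translates it literally as "warm hound".
print(translate("the cat eats hot dog"))  # prints: den katt spiser varm hund
```

Handling such cases is precisely why symbolic systems needed ever more grammatical rules and context heuristics, and why purely dictionary-based translation never reached human quality.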