Artificial Intelligence and its Breakthrough in the Nordics

A Study of the Relationship Between AI Usage and Financial Performance in the Nordic Market

Written by F. Ottosson, M. Westling

Paper category: Master Thesis (Business Administration > Finance)




2.1 The history and future of AI

The concept of AI has existed for more than half a century; it was first proposed in 1956 by John McCarthy. However, only in the last few decades has technology advanced far enough to realize the idea of artificial intelligence in practice, and the technology is expected to grow exponentially in the near future. To put this in perspective, the computing power of computers built today is about one billion times that of computers built in the 1970s (Bini, 2018, p. 2359). Figure 2 below further illustrates this development, showing that artificial intelligence has flourished in the last decade owing to advances in deep learning. As Figure 2 also shows, AI has two subgroups, namely machine learning and deep learning.

As a general concept, artificial intelligence was only a dream in the 1950s, when its pioneers proposed the idea of a machine that could be programmed to behave like a human (Copeland, 2016). The earliest form of artificial intelligence was a program for the game of checkers. It worked by programming all possible moves into the algorithm and letting the software choose the best move at each turn (Bini, 2018, p. 2359). In its most basic form, the program could play a game of checkers as a human would. However, this is far from the artificial intelligence we think of today, which originates from a subgroup of artificial intelligence called machine learning. What is special about machine learning is that the algorithm learns from past experience and improves itself through simple learning by doing. It works by feeding data to the software as a means of teaching it about specific things, such as which attributes belong to a particular kind of eye iris. Using machine learning, the software becomes able to distinguish these attributes accordingly.
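The learning-by-example idea described above can be illustrated with a toy supervised classifier. The sketch below is illustrative only and not part of the thesis: it uses a minimal nearest-centroid rule on invented two-feature "iris" measurements (all names and numbers are assumptions made for the example).

```python
# Toy illustration of supervised machine learning: the algorithm
# "learns" from labelled examples and can then classify inputs
# it has never seen before.

def train(examples):
    """Learn one centroid (average feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Assign the class whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Invented training data: (petal length, petal width) -> class label.
training_data = [
    ((1.4, 0.2), "type_a"), ((1.5, 0.3), "type_a"),
    ((4.7, 1.4), "type_b"), ((4.9, 1.5), "type_b"),
]
model = train(training_data)

# Unseen inputs are still classified by proximity to what was learned:
print(classify(model, (1.3, 0.25)))  # -> type_a
print(classify(model, (5.0, 1.6)))   # -> type_b
```

This corresponds to the "supervised" case discussed in the text: the labels in the training data supervise the learning, and generalization to new inputs follows from the learned averages rather than from explicitly programmed rules.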
Moreover, because of the learning-by-doing process, the program is also able to classify new iris inputs it has never seen before (Bini, 2018, p. 2359). Drawing on past experience, the software learns to make predictions about the world it is experiencing (Copeland, 2016). Simon et al. (2018, p. 22) further explain how machine learning performs two different types of task. The first is so-called "supervised machine learning", which, as in the scenario above, means the machine can practice on a set of labelled examples and then accurately analyze new data. The second is "unsupervised machine learning", which differs from the first in that the task is unsupervised and no labelled learning examples are provided. Instead, the software finds patterns in the input data and relationships between different inputs (Simon et al., 2015, p. 22).

2.2 Challenges of artificial intelligence

2.2.1 Cyber risk

New technologies bring many opportunities for stakeholders and can contribute to social progress and economic growth. These potentials, however, must be weighed against the related risks. Among the negative effects of artificial intelligence is that it can be used for dislocation, adversarial geopolitics, or malicious intent, for example when the technology is used for e-mail or online fraud (Floridi et al., 2018, p. 291). Another risk is the manipulation of AI-driven simulated markets. The collective term for these criminal risks is artificial intelligence crime (AIC) (King et al., 2018, p. 89). Artificial intelligence also carries risks in the form of social influence, which can both promote and reduce human prosperity, raising ethical issues related to that influence. Owing to the risk of overuse or misuse, the true potential of artificial intelligence is at risk of going unused (Floridi et al., 2018, p. 291). New technologies lead to new risks.
One example is cyber weapons, which must be considered and counteracted. Cyber weapons pose a large risk because they are easy to use, have low entry costs, and offer a high chance of success while their users maintain anonymity. In addition, cyber risks may endanger the stability and security of society, and the new risks and conflicts they bring differ from the traditional risks of violence (Taddeo, 2018, p. 339). Government and non-government actors, policy authorities, and military strategists therefore emphasize the need for cyber deterrence as a new strategy to preserve international stability (EU, 2014; International Security Advisory Committee, 2014; United Nations Institute for Disarmament Research, 2014; UK Government, 2014; European Union, 2015). AIC and the research related to it are spread across many fields, such as law, computer science, psychology, and robotics. Because artificial intelligence is relatively new, research on and solutions to its potential criminal uses are lacking (King et al., 2018, p. 90). King et al. (2018, p. 90) also provide a literature analysis of the AIC field. In their article, two research experiments are used as evidence that AI may be involved in crime. Seymour and Tully (2016) conducted an experiment on social media: a large number of messages were sent to persuade people to click on a link, where each unique link collected information about the user's previous activity and personal details, the true intention being hidden behind the message. Such information can then be used for fraud and other illegal activities. Martínez-Miranda et al. (2016) conducted another experiment, which showed that trading agents can carry out manipulative activities, including the placing of false orders.