How Greece used AI to detect asymptomatic travelers infected with COVID-19
29 September 2021
A few months after the COVID-19 outbreak, operations researcher Kimon Drakopoulos, who works in data science at the University of Southern California, offered to help the Greek government by developing a system that uses machine learning to determine which travelers were most likely to be infected and should therefore be tested. The European Union had asked Greece to allow non-essential travel again, but testing every traveler was of course not an option. Greece therefore chose a more efficient approach than the usual practices of randomized sample testing or testing based on the visitor’s country of origin: it launched this system, called ‘Eva’, and deployed it across all Greek borders.
Drakopoulos and his colleagues found that machine learning was two to four times more effective at identifying asymptomatic cases than the aforementioned methods during peak tourist season. This was possible because Eva used multiple sources of data beyond travel history to estimate an individual’s infection risk. These sources include demographic data such as the travelers’ age and sex, which was combined with results from previously tested passengers to calculate who in a group was at highest risk and needed to be tested. The same process also provided border agencies with real-time estimates of the prevalence of COVID-19.
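To make the idea concrete, here is a minimal sketch of how a limited daily testing budget could be allocated across traveler “types” while learning from lab results as they come back, in the spirit of the reinforcement-learning approach the paper describes. The traveler types, priors, and budget are hypothetical, and this Thompson-sampling-style loop is a simplification, not Eva’s actual algorithm.

```python
# Illustrative sketch only: allocate a limited testing budget across traveler
# "types" (e.g., country x age band x sex) and learn from results over time.
# Types, priors, and budget are hypothetical, not Eva's actual parameters.
import random
from collections import defaultdict

class TestAllocator:
    def __init__(self):
        # Beta(1, 1) prior on the positivity rate of each traveler type.
        self.positives = defaultdict(lambda: 1)
        self.negatives = defaultdict(lambda: 1)

    def choose(self, arriving_types, budget):
        """Pick which arriving travelers to test, given a limited budget."""
        # Sample a plausible positivity rate for each traveler from its type's
        # posterior, then test the travelers with the highest sampled risk.
        sampled = [
            (random.betavariate(self.positives[t], self.negatives[t]), i)
            for i, t in enumerate(arriving_types)
        ]
        sampled.sort(reverse=True)
        return [i for _, i in sampled[:budget]]

    def update(self, traveler_type, tested_positive):
        """Feed lab results back so tomorrow's allocation reflects them."""
        if tested_positive:
            self.positives[traveler_type] += 1
        else:
            self.negatives[traveler_type] += 1

# Example: 5 tests available for a small batch of arrivals (made-up types).
allocator = TestAllocator()
arrivals = ["UK-30s-F", "DE-60s-M", "UK-30s-F", "FR-20s-M", "IT-40s-F", "DE-60s-M"]
print("Test travelers at positions:", allocator.choose(arrivals, budget=5))
```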
When the researchers compared this model against methods that rely only on epidemiological metrics, such as random testing, it clearly performed better in all respects. One main reason is the limited predictive value such metrics have for asymptomatic cases. Consequently, the paper raises concerns about the effectiveness of internationally proposed border policies that rely on such population-level metrics.
All in all, Eva is a successful example of how reinforcement learning and artificial intelligence, combined with real-time data, can provide valuable assistance both in crisis situations and in the public health sector more broadly.
References
Bastani, H., Drakopoulos, K., Gupta, V. et al. Efficient and targeted COVID-19 border testing via reinforcement learning. Nature (2021). https://doi.org/10.1038/s41586-021-04014-z
Nature (2021) ‘Greece used AI to curb COVID: what other nations can learn’, 22 September. Available at: https://www.nature.com/articles/d41586-021-02554-y (Accessed: 29 September 2021).
My best friend, Spotify
29 September 2021
In today’s world, the majority of society uses social media. Being connected to a large number of people all around the globe goes hand in hand with sharing personal information about yourself. More and more people criticize the lack of privacy caused by the large amounts of data collected by services such as Facebook or Instagram. Nevertheless, I was recently thinking about this issue. It is no secret that social media users are obliged to share personal information to a certain extent. However, the life users show on these platforms is often superficial. Hence, I started thinking about other services, such as Spotify, where the collection of personal information is less obvious than on other social media services. Everyone with a passion for music knows that your taste in music can say a lot about you and your personality. Music can connect people and cultures, but it can also shape someone’s sense of style and fashion. Further, the music we listen to often reflects our current state and mood. Nevertheless, people hesitate less to sign up for Spotify than to create a Facebook account when it comes to privacy and data regulations. Now the question arises: who knows me best?
In order to answer this question, it is important to take a look at the technology behind Spotify. As most people know, Spotify is not only known for its wide range of music but also for its personalized features, especially the ‘Daily Mix’ introduced in 2015, a playlist tailored to each user’s recent listening and preferences.
To create this playlist, Spotify needs to extract information by making use of three so-called recommendation models. The first model is known as collaborative filtering. You can visualize this as a huge matrix of millions of vectors, with one dimension for users and one for songs. In everyday language, Spotify analyses your listening habits and matches them to those of similar users. Based on that, the algorithm recommends songs that other users with a similar taste in music liked. The second model uses Natural Language Processing (NLP). In short, Spotify scans the web for articles, blog posts, or discussions related to a specific song, artist, or genre and connects them. This can also be done by scanning lyrics. To illustrate this, think of rap music: many artists use similar terms, and when speaking about rap music in ‘natural language’, users often communicate in a certain slang. By connecting these dots, Spotify can detect similar music and draw connections between songs or artists. The most recently introduced recommendation model analyzes raw audio. Spotify identified that the two previously described models put upcoming artists at a disadvantage. Hence, in this last model the raw audio is analyzed for Acousticness, Danceability, Energy, Instrumentalness, Liveness, Speechiness, Tempo, and Valence. Through that, Spotify recognizes similar songs and groups them together.
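To give a feel for the collaborative-filtering idea, here is a toy sketch that factorizes a small user-by-song play-count matrix and recommends an unheard song to a user based on listeners with similar taste. The matrix, song names, and rank-2 factorization are made up for illustration; Spotify’s production models are vastly larger and more sophisticated.

```python
# Toy sketch of collaborative filtering: factorize a user-by-song play-count
# matrix and recommend a song that similar listeners played. All data invented.
import numpy as np

songs = ["rap_a", "rap_b", "indie_a", "indie_b", "jazz_a"]
plays = np.array([
    [12, 9, 0, 1, 0],   # user 0: mostly rap
    [10, 7, 1, 0, 0],   # user 1: mostly rap
    [0,  1, 8, 6, 0],   # user 2: mostly indie
    [0,  0, 7, 9, 1],   # user 3: mostly indie
])

# Low-rank factorization: each user and song gets a small "taste" vector.
U, s, Vt = np.linalg.svd(plays.astype(float), full_matrices=False)
k = 2
user_vecs = U[:, :k] * s[:k]   # user taste vectors
song_vecs = Vt[:k, :].T        # song vectors in the same space

# Predicted affinity = dot product; recommend the unheard song with the top score.
user = 1
scores = user_vecs[user] @ song_vecs.T
unheard = [i for i in range(len(songs)) if plays[user, i] == 0]
best = max(unheard, key=lambda i: scores[i])
print("Recommend to user", user, "->", songs[best])
```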
Overall, it seems that Spotify knows its users better than any other platform does. Spotify might even know more about us and our mental state than we know, or want to admit, ourselves. It is therefore questionable to distrust social media platforms while using Spotify without questioning it. Let me know in the comments how you feel about this!
References
Anderson, A., Maystre, L., Anderson, I., Mehrotra, R., & Lalmas, M. (2020). Algorithmic Effects on the Diversity of Consumption on Spotify. Proceedings of The Web Conference 2020. https://doi.org/10.1145/3366423.3380281
Ciocca, S. (2020, April 9). How Does Spotify Know You So Well? – Featured Stories. Medium. https://medium.com/s/story/spotifys-discover-weekly-how-machine-learning-finds-your-new-music-19a41ab76efe
Spotify. (2021). Web API Reference | Spotify for Developers. https://developer.spotify.com/documentation/web-api/reference/
Tiffany, K. (2018, February 5). You can now play with Spotify’s recommendation algorithm in your browser. The Verge. https://www.theverge.com/tldr/2018/2/5/16974194/spotify-recommendation-algorithm-playlist-hack-nelson
Data Analytics in Team Sports: Top Performance as a Source of Revenue!
21 September 2021
The sports industry is undoubtedly one of the biggest industries in the world. To put it in numbers, the sports market reached $458.8 billion in 2019 before declining to $388.3 billion in 2020. It is expected to reach $599.9 billion by 2025 and $826 billion by 2030 (The Business Research Company, 2021).
Like many other industries, the sports industry was hit hard by the COVID-19 outbreak. As mentioned above, the sports market suffered a 15.4% decline in value (The Business Research Company, 2021). The main reason is that the main source of revenue for almost all sports associations is their fans. Under COVID-19 regulations, fans were not only barred from following their favourite teams in person but also became distanced from the whole experience of supporting and keeping up with their team. Cancelled games, long breaks from action, bans on fans in stadiums, and poor athlete performance caused by inactivity or the psychological toll of consecutive quarantines were some of the most prominent issues, and all of them pushed fans further away from the teams they support.
All of the issues mentioned above underline the importance of keeping fans happy and offering them the best possible spectacle. Fans who are pleased with their team’s performance tend to bring in more revenue: they buy team merchandise, pay high prices for seats or season tickets, pay to visit the stadium or the team’s facilities, and buy products closely associated with their teams. In addition, a bigger fan base usually brings greater sponsorship deals to teams and associations. It is no secret that the more consistently spectacular a team is, the bigger the fan base it will build over the years. But how can a team stay on top of its performance and attract as many fans as possible, who will eventually generate more revenue?
The answer is to keep the performance of the whole team, and of its players as individuals, at the highest possible level. Data analytics is a technology that has been established for many years in various industries. Sports is reportedly one of the slowest industries to digest it and apply it to its processes, but lately there have been clear signs of progress. Adoption is still immature, yet tremendous efforts are being made to implement and establish it, especially in team sports.
Let’s take the example of the National Basketball Association (NBA), one of the world’s most heavily marketed sports products. Historically, basketball had people responsible for noting statistics about players in order to scout them or to improve their in-game performance through coaching. These statistics were mainly kept on paper, and only classical statistics such as points, attempts, and assists could be recorded. The recent installation of cameras all around the court allows teams to keep much more detailed statistics, which can be further examined with business analytics. These analyses can then feed machine learning models used to design winning strategies. Planning the most effective defensive scheme against top teams or players, or the most efficient offensive plays given the roster’s characteristics and capabilities, are examples of how such innovative technologies can provide a competitive advantage (HBS Digital Initiative, 2020).
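As a hypothetical illustration of what such detailed tracking statistics might feed into, the sketch below clusters a player’s shot locations to reveal hot zones a defensive scheme could target. The coordinates are synthetic and the use of k-means is my own assumption, not an NBA team’s actual method.

```python
# Hypothetical illustration: cluster a player's shot locations into "hot zones"
# that a defensive game plan could target. All coordinates are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Fake (x, y) shot locations in feet: corner threes, threes above the break, rim attempts.
shots = np.vstack([
    rng.normal([22, 3], 1.5, size=(40, 2)),
    rng.normal([0, 26], 2.0, size=(30, 2)),
    rng.normal([0, 4], 1.0, size=(50, 2)),
])

zones = KMeans(n_clusters=3, n_init=10, random_state=0).fit(shots)
for center, count in zip(zones.cluster_centers_, np.bincount(zones.labels_)):
    print(f"zone around ({center[0]:5.1f}, {center[1]:5.1f}) ft: {count} shots")
```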
One key concern for all teams is keeping their players well rested so that they can avoid potential injuries. Lately, teams have been collecting data from their players through wearable equipment, by monitoring their sleep patterns, or even by collecting biological samples, in order to track their fitness and rest levels and even predict their future performance. By analyzing these data, teams try to design the most appropriate rest strategy for each player. The more tired a player is, the more prone to injury he or she will be. To keep players at peak fitness so they can contribute as much as possible during games, teams apply all of these monitoring techniques to collect the relevant data (HBS Digital Initiative, 2020).
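A minimal sketch of the kind of model this data could support is shown below: relate simple load and recovery readings to past injuries, then flag players whose current numbers look risky. The features, synthetic data, and threshold are invented for illustration, not any team’s actual model.

```python
# Hedged sketch: fit a simple injury-risk model on synthetic workload/sleep data
# and score today's roster. Features, data, and labels are all invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
minutes_7d = rng.normal(180, 40, n)    # minutes played in the last 7 days
sleep_hours = rng.normal(7.0, 1.0, n)  # average sleep per night (wearable)
# Synthetic ground truth: more minutes and less sleep -> higher injury odds.
risk = 0.03 * (minutes_7d - 180) - 0.8 * (sleep_hours - 7.0)
injured = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X = np.column_stack([minutes_7d, sleep_hours])
model = LogisticRegression().fit(X, injured)

# Score today's roster and suggest who might need extra rest.
today = np.array([[230, 6.0], [150, 8.0]])
for player, p in zip(["Player A", "Player B"], model.predict_proba(today)[:, 1]):
    print(f"{player}: estimated injury risk {p:.0%}")
```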
Last but not least, data analytics now plays a major role in scouting. In the past, scouts based their decisions on statistics kept on paper, on watching players live, or on highlights found online. With data analytics, statistics can be analyzed in depth, providing a clearer and more detailed report on a player and reducing the risk of poor decisions. Transfers and player contracts cost teams a great deal of money and are two of their most important expenses, so making the right decisions and attracting the most suitable players in order to achieve consistent performance levels is vital (HBS Digital Initiative, 2020).
Of course, the data models being analysed are far from perfect at the moment. One parameter that is very hard to capture is an athlete’s psychology, not only during a game but throughout his or her whole time as a member of a team. Data analysis has changed the way teams operate, but to make the right decisions they still have to consider the human factor. What can be predicted, though, is that teams that invest in new technologies to analyse the tremendous amount of data they can collect from their athletes can stay ahead of their opponents and build consistent performance, keeping them at the top, which naturally attracts more and more fans and thus more revenue, as explained in this post.
On football shows such as Match of the Day, well-known pundits commonly let their sentiments be heard on the recent performances of certain players and clubs. After all, who doesn’t love to tune in to Jamie Carragher and Phil Neville arguing over Manchester United’s loss of form? While much of what is said about a player’s performance is opinion, these accusations, as well as glorifications, are almost always supported by data presented in the form of statistics. It is no surprise that data collected on a player’s total distance covered, shot conversion, and pass completion may be used to bolster these arguments, as this has been common throughout the past decade.
Recently, however, the value of data within the context of football has significantly risen, due to developments in deep learning and predictive analytics (Murray & Lacome, 2019). Adapted training sessions, player recruitment, and analysis of the opponent’s playing style are all ways in which clubs’ staff can improve their decision making by leveraging data.
Although from a fan’s perspective most of the football action takes place on match day, according to Murray and Lacome (2019), professional players train at least five days a week. Data is constantly collected on a variety of player metrics, such as running distance, number of accelerations, and force load distribution. The trackers that collect this data help staff calibrate the intensity of certain drills. Analyzing the force load distribution, for example, allows coaches to examine which of a player’s muscle groups are weak, so that critical decisions can be made in the days leading up to the match.
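As a minimal sketch of how such tracker data might inform drill intensity, the snippet below computes an acute:chronic workload ratio from a player’s daily running distance, one commonly used summary of training load. The data and the caution threshold are assumptions for illustration, not the metrics Murray and Lacome describe.

```python
# Minimal sketch: summarise GPS-tracker load as an acute:chronic workload ratio.
# Daily distances and the caution threshold are made up.
import pandas as pd

# Hypothetical daily distance covered (km) for one player over four weeks.
distance = pd.Series(
    [6.2, 7.1, 0.0, 8.0, 5.5, 9.3, 0.0] * 4,
    index=pd.date_range("2021-08-01", periods=28, freq="D"),
)

acute = distance.rolling(7).sum().iloc[-1]         # load over the last 7 days
chronic = distance.rolling(28).sum().iloc[-1] / 4  # average weekly load over 28 days
ratio = acute / chronic
print(f"acute:chronic workload ratio = {ratio:.2f}")
if ratio > 1.3:  # commonly cited caution zone; the exact threshold is an assumption
    print("Consider reducing drill intensity this week.")
```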
Furthermore, data collected on a team and its opponents have proven to provide valuable insights. According to Burn-Murdoch (2018), football’s “analytics era” began in 2006, when London-based Opta Sports recorded the time and location of every pass, shot, tackle, and dribble. Today, about 2,000 data points are collected per match (Burn-Murdoch, 2018). This development in data collection has progressed to the point where Premier League shows such as Match of the Day now present viewers with the number of goals they can expect that weekend.
However, arguably the most impressive development in data-driven football has come from sports scientists who have developed algorithms that predict the likelihood of certain in-game player decisions (Burn-Murdoch, 2018). Machine learning programs are now able to model player movements and the amount of space a player creates through their positioning on the pitch. This technique, referred to as “ghosting”, has uncovered an otherwise difficult-to-measure aspect of a player’s skill set, namely creating space, which is an invaluable asset when considering buying a player.
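The actual ghosting models are far more sophisticated, but the toy sketch below captures the underlying idea of quantifying space: for every location on a grid over the pitch, check whose player is nearest, and see how the controlled share changes when an attacker makes a run. Player coordinates and the nearest-player rule are invented for illustration.

```python
# Toy "space control" illustration: which team has the nearest player to each
# grid point on the pitch? Not the actual ghosting algorithm; data is invented.
import numpy as np

def controlled_share(team_xy, opp_xy, pitch=(105.0, 68.0), step=1.0):
    """Fraction of grid points whose nearest player belongs to `team_xy`."""
    xs = np.arange(0, pitch[0], step)
    ys = np.arange(0, pitch[1], step)
    grid = np.array([(x, y) for x in xs for y in ys])
    d_team = np.linalg.norm(grid[:, None, :] - team_xy[None, :, :], axis=2).min(axis=1)
    d_opp = np.linalg.norm(grid[:, None, :] - opp_xy[None, :, :], axis=2).min(axis=1)
    return float((d_team < d_opp).mean())

team = np.array([[30.0, 34.0], [50.0, 20.0], [50.0, 48.0], [70.0, 34.0]])
opponent = np.array([[55.0, 34.0], [65.0, 25.0], [65.0, 43.0], [80.0, 34.0]])

before = controlled_share(team, opponent)
# Suppose one attacker makes a run into the half-space.
team_after = team.copy()
team_after[3] = [85.0, 45.0]
after = controlled_share(team_after, opponent)
print(f"space controlled: {before:.0%} -> {after:.0%}")
```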
Considering the impact data analytics has already had in the football world within the last decade, who knows which new technological developments will occur in the near future and how they will shape the way decisions are made!
References:
Murray, E. and Lacome, M., 2019. What Difference Can Data Make To A Football Team?. [online] Exasol. Available at: <https://www.exasol.com/en/what-difference-can-data-make-for-a-football-team/> [Accessed 5 October 2020].
Burn-Murdoch, J., 2018. How Data Analysis Helps Football Clubs Make Better Signings. [online] Financial Times. Available at: <https://www.ft.com/content/84aa8b5e-c1a9-11e8-84cd-9e601db069b8> [Accessed 5 October 2020].
Data for Good
16 September 2019
Data Analytics and Business Intelligence have changed the way many businesses operate worldwide. Actionable insights and data-driven decision-making have created a new trend in the corporate sector. Studies predict that by 2025, 60% of the 163 zettabytes of existing data will be created and managed by enterprise organizations (Reinsel, Gantz and Rydning, 2018). Furthermore, according to McKinsey (Bokman et al., 2014), data-driven organizations are 23 times more likely to acquire customers, six times as likely to retain customers, and 19 times as likely to be profitable as a result.
How Big Data will turn Insurance Fraud into an issue of the past
8 October 2017
Losses to fraud in property-casualty insurance are huge: an estimated 10% of industry losses ($32 billion) are attributed to fraud, and the problem is getting worse, with 61% of insurers reporting an increase in the number of suspected frauds (Insurance Networking, 2016). In the past, insurance claims were delegated to claims agents who had to rely on a limited amount of information and on their intuition to resolve those cases. With the arrival of big data analytics, however, new tools have become available and are now changing the field of fraud detection drastically.
Towers Watson reported that 26% of insurers used predictive analytics to combat fraud in early 2016. This number is expected to rise to 70% in 2018, a bigger increase than in any other big data application (Insurance Networking, 2016).
Insurance companies possess a large amount of data about their customers, be it through claims documents or social media accounts available online. Using technologies such as text mining, sentiment analysis, content categorization, and social network analysis, this data is collected, labelled, and stored for further analysis (Infosys, 2017). Predictive analytics can then raise an alert when a claim appears fraudulent. A claims agent subsequently examines the suspicious claim more closely and decides on the final measures to be taken. Finally, confirmed frauds are added to the system’s data pool, which further strengthens future analytics results.
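As a hedged sketch of one small ingredient of such a pipeline, the snippet below scores the free-text description of a new claim against past claims labelled by investigators and raises an alert above a threshold. The example texts, the TF-IDF plus logistic regression model, and the threshold are illustrative assumptions, not any insurer’s actual system.

```python
# Illustrative only: flag claims whose free-text description resembles past
# fraudulent claims. Texts, labels, and threshold are invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

claims = [
    "rear-ended at traffic light, bumper damage, police report attached",
    "phone lost on holiday, no receipt available, bought recently in cash",
    "kitchen fire, photos and fire brigade report included",
    "laptop stolen from unlocked car, purchased last week, no invoice",
]
is_fraud = [0, 1, 0, 1]  # labels confirmed by past investigations

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(claims, is_fraud)

new_claim = "watch lost abroad, paid cash, receipt missing"
score = model.predict_proba([new_claim])[0, 1]
print(f"fraud score: {score:.2f}")
if score > 0.5:  # alert threshold (assumption); route to a human claims agent
    print("Alert: claim flagged for manual review")
```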
In the coming years, insurers with sophisticated data analytics capabilities will outperform their peers, as they can offer better customer service through faster claims handling and lower prices thanks to reduced costs. Insurers like AXA are already investing heavily in this technology (AXA, 2017); it remains to be seen which companies will assert themselves in this changing environment. Customers will profit from these innovations as well: better and more precise claims handling means claims are accepted faster and customers no longer have to deal with overly bureaucratic processes.
However, utilizing social media profiles will raise moral and legal questions about privacy and users’ self-determination with regard to their data. Insurance companies must be careful not to lose their customers’ trust.
When it is cold outside, you will most likely put on a warm winter coat. And when the sun starts shining, the ice cream sales skyrocket. This may seem evident, but connecting sales data to weather information can be very insightful. The weather influences our purchasing behaviour much more than we could even imagine. Do you know how? Let’s find out.
Purchase channel
One obvious influence the weather has is on the channel we use for our purchases. On sunny days, bricks-and-mortar stores enjoy more footfall, while online turnover increases on rainy days. However, the influence of rain is industry dependent [1]. On rainy days there was a 12% increase in website traffic for retailers in the home & furniture, wholesale, and clothing verticals, whereas there was no significant difference for big box retailers.
This is interesting, but unfortunately the studies do not reveal the underlying reason. I think it has to do with different products reacting differently to weather change: as big box retailers carry a wide variety of products, the fluctuations are more likely to cancel each other out.
Mood change
Temperature, air pressure, humidity, snowfall, and sun all have an impact on people’s mood, and these moods translate into different purchasing behaviours. On sunny days, people tend to be more opportunistic in their purchases; for example, more cars are sold on hotter days. On the other hand, during hurricane season, people book more holidays to resorts in exotic destinations.
This is not only visible in economics. Different moods, caused by the weather, also change your chances in romance! The French psychologist Nicolas Gueguen ran an experiment in which an attractive male approached unaccompanied young women and asked for their telephone number [2]. “I just want to say that I think you’re really pretty,” he cooed. “I’ll phone you later and we can have a drink together someplace.” Antoine achieved an impressive success rate of 22% on sunny days but only 14% when it was cloudy.
Conclusion
Obviously, fluctuations in the weather influence shopping. However, the influence reaches further than I expected myself. Another eye-opener is how easily we can connect data on these kinds of situational factors to already existing data to find new insights. Do you have any suggestions for other situational factors to elaborate on?
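To show how simple such a connection can be, here is a small sketch that lines up daily ice cream sales with daily weather readings and looks at the correlations. The figures, column names, and data sources are placeholders; any daily sales export and weather feed would do.

```python
# Sketch: join daily sales with daily weather and inspect the correlations.
# All numbers and column names are placeholders.
import pandas as pd

sales = pd.DataFrame({
    "date": pd.date_range("2021-06-01", periods=7, freq="D"),
    "ice_cream_units": [120, 135, 90, 60, 160, 180, 200],
})
weather = pd.DataFrame({
    "date": pd.date_range("2021-06-01", periods=7, freq="D"),
    "max_temp_c": [24, 26, 20, 17, 28, 30, 31],
    "rain_mm": [0, 0, 4, 12, 0, 0, 0],
})

merged = sales.merge(weather, on="date")
print(merged[["ice_cream_units", "max_temp_c", "rain_mm"]].corr().round(2))
```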
“Data is the lifeblood of the digital economy, it can give insight, inform decisions and deepen relationships”
The Big Data movement has gained quite some attention this year as one of the bigger e-commerce trends of 2016. And rightly so, considering that firms employing big data strategies enjoy an average increase of 60% in business margins. But then why haven’t all firms started incorporating big data practices? Or better yet, why do some firms that employ big data strategies not experience the benefits? But let’s start from scratch.
The Big Data market is expected to grow to $187 billion by 2019, an increase of approximately $65 billion in just five years. What has caused this increase? The digitization of business activities has enabled firms to record all types of information, which can then be structured to determine which data is valuable to analyze further and act upon. Although one may attribute the popularity of big data to advancements in our technological environment, its success results from the greater availability of information. The more data there is to analyze, the more potential patterns can be identified, ultimately creating cost reductions, better decision-making opportunities, and new products and services that meet consumers’ exact needs.
Yet, despite having employed data management practices, why do some firms fail to experience the aforementioned benefits? Studies have shown that many of these firms lack a proper business data strategy with the necessary skills and technology. Additionally, some firms have access to large data sets but do not know how to use the data to extract value from it. They regard big data as a simple business activity instead of making it part of the company culture; the firms with effective practices are those that view big data as a valuable resource of the firm in its entirety, not just of the IT department.
So how should a firm start making the most of its data? It should formulate a data strategy, tailored to the company’s goals, that is based on three elements: data, analytical models, and tools. First, a company should assemble, integrate, and structure its data. Although this may initially be a long process, having all the information together helps advanced analytical models detect unexpected patterns, potentially creating a competitive advantage for the company. The last essential element is tools, which translate the analytical outcomes into language that managers and employees can understand. It does not matter whether a company has analyzed its data and found patterns: if the resulting model is unclear to employees, they will not act on it. The data management process has then been practically useless, and the firm as a whole does not benefit from big data.
What are your thoughts on big data? Why do you think some firms aren’t able to fully reap the benefits of the big data movement?
References
Gutierrez, D. (2016). What Does the Future Hold for Big Data Analytics?. [online] Inside Big Data. Available at: http://insidebigdata.com/2016/10/22/what-does-the-future-hold-for-big-data-analytics/ [Accessed 21 Oct. 2016].
Li, T. (2016). Session 2: Industry Disruption, online course materials, Semester 1, 2016, Erasmus University.
McAfee, A. and Brynjolfsson, E. (2012). Big Data: The Management Revolution. [online] Harvard Business Review. Available at: https://hbr.org/2012/10/big-data-the-management-revolution [Accessed 21 Oct. 2016].
McKinsey & Company. (2013). Big data: What’s your plan?. [online] Available at: http://www.mckinsey.com/business-functions/digital-mckinsey/our-insights/big-data-whats-your-plan [Accessed 21 Oct. 2016].
White, S. (2016). Study reveals that most companies are failing at big data. [online] CIO. Available at: http://www.cio.com/article/3003538/big-data/study-reveals-that-most-companies-are-failing-at-big-data.html [Accessed 21 Oct. 2016].