Will you be designing your own baby? The impact of AI and DNA alterations on the future of the human race.

8 October 2020


 

In 2003, the Human Genome Project, an almost 15-year study with a whopping cost of $2.7 billion, provided us with the genetic blueprint of a human being. The project mapped the human genome: the complete set of deoxyribonucleic acid (DNA) in our body. DNA consists of the iconic twisting, paired strands, built from four chemical units known as the nucleotide bases: adenine (A), thymine (T), guanine (G) and cytosine (C), which sit in pairs on opposite strands. Within the nucleus of our cells, 23 pairs of chromosomes encapsulate approximately 3 billion of these base pairs. Working together, all of these pairs are the building blocks that determine us: how we look, how we act and how we feel.
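The base-pairing rule can be sketched in a few lines of code. This is an illustrative snippet, not bioinformatics software: given one strand, it derives the opposite strand, since A always pairs with T and G with C.

```python
# A pairs with T, G pairs with C; the second strand of the double helix
# runs in the opposite direction, so it is the *reverse* complement.
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand):
    """Return the opposite strand of a DNA sequence."""
    return "".join(PAIRS[base] for base in reversed(strand))

print(reverse_complement("ATGC"))  # -> GCAT
```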


 

Even though the meaning of every DNA pair or group of DNA pairs has not yet been discovered, a lot of information has already been acquired, such as genetic markers linked to diseases like Alzheimer's, sickle cell anemia, blindness, AIDS and muscular dystrophy. Moreover, we also already understand some of the genetic code behind common physical attributes such as eye colour, hair colour and even proneness to sweat!

Industry appearance and growth

Since the completion of the Human Genome Project, the price of sequencing your own DNA has fallen drastically, with the current cost of a complete individual genomic picture falling under $1,000.

This drastic fall in price has given space to a whole new industry, with companies like MyHeritage providing test kits for a mere €49 to determine your biological heritage, sharing information on which areas of the world (such as Europe, Asia, North/South America or Africa) your ancestors came from. Even though the accuracy of these tests is still under scrutiny, it already marks the start of an emerging industry.

On a more serious note, there are also more practical applications where the screening and data compilation of the human genome has proven to provide a lot of value: carrier testing, for the chances of passing genetic diseases on to offspring; prenatal testing, to find genetic or chromosomal disorders; forensic testing, for crime scenes; predictive testing, to detect future disposition to diseases such as Alzheimer's; and lastly, preimplantation testing for in-vitro fertilization, to test the genetic code of fertilized eggs.


Cost per genome data – 2020

The value of the human genome, and of uncovering its many secrets, rose exponentially when in 2012 a breakthrough led by Jennifer Doudna and Emmanuelle Charpentier showed how an enzyme called Cas9 could be used to cut, edit or add genomic data into our DNA.

This discovery was made by researching the antivirus defence of bacteria, which, when attacked by a virus, build fragments of the virus's DNA into their own genome as a defence mechanism. Through this, scientists learned how these genome patterns work and what they mean. The patterns are known as "clustered regularly interspaced short palindromic repeats", or CRISPR.

Sequencing enough genomes and running enough tests to figure out the exact function of fragments of genetic code is very data-heavy. As mentioned before, each human being carries about 3 billion pairs of genomic material. Nevertheless, with recent advancements in big data computing and AI, deciphering and altering the code of life has never been this close. Because of this, the genetic testing market has been growing exponentially at a CAGR of 11.5% annually. Many countries are actively investing in this new technology: the UK aims to fully sequence the genome of 5 million Britons, the US aims to sequence over 1 million citizens, and China is most aggressive, aiming to sequence 50% of newborns by 2020.
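To make the cited growth figure concrete, compound annual growth is straightforward to compute; the starting value of 100 below is purely illustrative.

```python
# A market growing at an 11.5% CAGR compounds every year:
# value_n = value_0 * (1 + rate) ** years
def compound(value, rate, years):
    return value * (1 + rate) ** years

# An index of 100 today grows to about 172 after five years at 11.5%.
print(round(compound(100, 0.115, 5), 1))  # -> 172.3
```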

Innovative disruption

The cost of editing and studying the human genome has fallen drastically. This has opened up the scene to biohackers: people, in most cases without medical qualifications, who take CRISPR advancements into their own hands. As can be seen in the Netflix documentary series Unnatural Selection, people can buy genome editing toolkits from $60 to about $1,000 (available in the US but illegal in the Netherlands) to tinker with the human genome, and in extreme cases they experiment on themselves. Examples include Tristan Roberts, an HIV-positive man who injects himself with an experimental gene therapy of unproven efficacy, and Josiah Zayner, a biohacker infamously known for injecting himself with a supposedly strength-enhancing gene therapy at a convention.


CRISPR Cas9 genetic engineering kit – $150

The emergence of these biohackers brings positive disruption, such as crowd-sourced study groups delivering much-needed biomedical advances at low cost (e.g. cheaper alternatives to penicillin). Yet, simultaneously, playing with the genetic code of humans and animals without ethical standards or supervision can cause irreparable damage, discredit the industry and slow down official research through stronger regulations.

Human DNA alteration: the emergence of designer babies

CRISPR has proven potential to remove heritable diseases from the human genome by making selective cuts in fertilized eggs. With the growing efficiency of in-vitro fertilization procedures, pre-implantation genetic testing becomes more and more feasible. Through these tests, we will be able to deduce many characteristics about a person's genetics. As portrayed in the Netflix documentary Human Nature, making selective cuts, additions and changes to the genomic code could mean the disappearance of genetic conditions and diseases such as sickle cell disease, Crohn's disease, Down syndrome, Alzheimer's and even susceptibility to HIV.

This might seem far away, yet an infamous experiment by a group of Chinese scientists in Shenzhen in 2018, who implanted gene-edited embryos made to be resistant to HIV, shows that these applications are right at our doorstep. The implanted woman gave birth to twins carrying the edit, and the lead scientist was given a three-year prison sentence and a 3 million yuan fine. This unethical experiment shook the scientific community to its core and strengthened the international rulings on experimentation with CRISPR-altered human embryos.


The removal of diseases is not the only thing scientists and companies are interested in. With increased knowledge of the function of different genes, we are approaching a reality where gene-editing babies for desired physical and mental attributes becomes more and more of a possibility. It is a close parallel to the 1997 classic sci-fi noir film Gattaca, in which humans could define every single aspect of their child, creating new divides between the wealthy and the poor, where money was no longer the only difference between the classes. Currently, without the use of CRISPR, a fertility clinic in California, USA, already allows parents to choose the eye and hair colour of their child by comparing the genomes of different fertilized embryos.

 

More complex attributes such as strength, intelligence and creativity have not yet been decoded, because many different sets of genes play a part in them, along with epigenetics (genes turning on or off due to environmental effects over time). Nevertheless, fast and impactful advances in AI and large databases of human genome data will provide deeper insight into our building blocks and into which exact changes achieve a desired result. This will open a world of possibilities for altering the human genome in the years to come, yet many have asked to what extent this control should be up to us. Would it be ethical to genetically engineer our offspring? Should these changes be passed on to future generations as well? How would pricing for such a disruptive innovation work?

There is also a movement for a moratorium (a worldwide prohibition or freeze) on the clinical use of germline editing in humans. Given its large potential benefits, it is hard to assess whether this technology will bring more good than bad, with fears that it will only be available to the rich or will negatively impact the genetic code of the human race down the line. Nevertheless, global competition and a lack of trust between countries make a worldwide freeze an unlikely scenario.

What is your opinion? Should genetic changes be passed down generations? Should all diseases be removed? Would you change your own genetic code if it were a possibility? What would you change?


References:

https://www.genome.gov/human-genome-project/Completion-FAQ#:~:text=In%201990%2C%20Congress%20established%20funding,billion%20in%20FY%201991%20dollars.

https://www.labiotech.eu/crispr/crispr-technology-cure-disease/

https://medlineplus.gov/genetics/understanding/basics/dna/

https://www.sciencefocus.com/science/who-really-discovered-crispr-emmanuelle-charpentier-and-jennifer-doudna-or-the-broad-institute/

https://medlineplus.gov/genetics/understanding/testing/uses/

https://www.myheritage.nl/dna

https://apnews.com/press-release/pr-wiredrelease/5c6893c18d5c79e1d8aaf5e13a7dc86c

https://singularityhub.com/2018/11/14/designer-babies-and-their-babies-where-ai-and-genomics-could-take-us/

https://www.the-odin.com/diy-crispr-kit/

https://www.sciencemag.org/news/2019/12/chinese-scientist-who-produced-genetically-altered-babies-sentenced-3-years-jail

https://nerdist.com/article/20-year-anniversary-gattaca-genetics/

https://theconversation.com/experts-call-for-halt-to-crispr-editing-that-allows-gene-changes-to-pass-on-to-children-113463

https://www.netflix.com/nl-en/title/81220944#:~:text=2019PG%201h%2034mDocumentary,modification%20research%20known%20as%20CRISPR.

https://www.netflix.com/nl/title/80208910


On the edge of something new

7 October 2020


We are entering an era where two new technologies for computing are becoming more and more crucial. The two computing types, quantum and edge, will have a crucial impact on computing power and will increase processing abilities enormously.

I already briefly mentioned quantum computing in my other article about the DARQ technologies (see here), and in this article I want to dive deeper into what quantum computing is, its benefits, and its differences from edge computing, since the two are sometimes seen as similar, which they aren't.

 

The most important points about quantum computing
While an explanation of quantum computing and its workings could fill books, I will try to keep it short and point out the essentials. Basically, quantum computers are able to solve problems that 'traditional' computers cannot, mainly because traditional computers can only process information encoded as 1s or 0s. The extra power of quantum computers comes from qubits: units of information that can exist in a superposition of 0 and 1 at once, so that two qubits can hold the four values 00, 01, 10 and 11 at the same time. That way, a quantum computer can perform computations in parallel, crucially increasing its computing power and therefore its efficiency compared to 'traditional' computers.
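The parallelism described above can be illustrated by simulating a two-qubit register as a vector of four amplitudes. This is a plain-Python sketch of the underlying maths, not real quantum hardware:

```python
import math

# A 2-qubit register is described by four amplitudes, one per basis
# state |00>, |01>, |10>, |11>. Measurement probabilities are the squared
# magnitudes of the amplitudes and always sum to 1.
def probabilities(amplitudes):
    return [abs(a) ** 2 for a in amplitudes]

# Putting both qubits of |00> into equal superposition (a Hadamard gate
# on each qubit) gives every basis state the same amplitude 1/2.
h = 1 / math.sqrt(2)
state = [h * h, h * h, h * h, h * h]

for label, p in zip(["00", "01", "10", "11"], probabilities(state)):
    print(label, round(p, 2))  # each of the four values is equally likely
```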

However, to perform those actions, quantum computers require special algorithms, many of which have yet to be defined. Scientists have been researching for years but have yet to find a way to define usable algorithms that make quantum computing practical at large scale.

 

The most important points about edge computing
With the constant development and improvement of technologies like XR, autonomous vehicles and IoT, the demand for instant calculations and minimal latency in data exchange is increasing. Most of these technologies do not 'have time' to wait for their requests to travel across networks, reach a computing core, be processed, and then be sent back. The computing needs to be performed either closer to the device or, ideally, within it, in order to reduce latency.

To meet this need, edge computing is on the rise. The idea of edge computing is to perform computations either near or right at the source of the data, reducing the latency that cloud computing cannot avoid by running fewer processes in the cloud. However, edge computing is not there to replace cloud computing but rather to work alongside it. A clear division between computations that need immediate feedback and processes that can withstand a certain latency will drastically increase the speed and efficiency of processes*.

 

Why are the technologies crucial?
Both technologies have a direct impact on several other technological advances, like the DARQ technologies mentioned in my other article, increasing speed, efficiency and security, but also on technologies used in, for example, the healthcare and automotive industries.

The necessity and potential of both computational technologies can be seen in the increased research efforts by big companies like Google, Amazon and Verizon. In 2019, Google set a new benchmark for computational speed with a new kind of processor, and Verizon and Amazon introduced a 5G edge cloud computing partnership to launch IoT devices and applications at the edge.

 

With the constant increase in data collection and in the requests being computed by processors, the need for technological advances is clear. Both technologies create ample opportunities within industries to succeed and drive innovation and change. However, as usual, the big tech companies are at the forefront of exploring and developing them.

 

What’s your pick?
Will smaller companies be able to shape and use these technologies soon, or do they need to wait until bigger companies make them available at large scale?

 

 

_____________________________________________

*Please note: when we talk about 'immediate' feedback to computational requests, the differences between edge computing and cloud computing are a matter of milliseconds. However, this difference can become crucial in several situations, for example in the avoidance of traffic accidents by autonomous vehicles, which is why it is mentioned here.

 

Sources
https://futuretodayinstitute.com/trend/quantum-and-edge/
https://www.keyinfo.com/ai-quantum-computing-and-other-trends/
https://www.upgrad.com/blog/trending-technologies-in-2020/


Black Mirror: The Devil’s Advocate of Future Technology.

7 October 2020


Every millennial has experienced the time-consuming and exhausting moments in which you have to explain to your (grand)parents how to send a proper WhatsApp message, use Google Maps, or what to post and not to post on social media. The technological innovation cycle has shortened at such a pace that even we as millennials struggle to cope with the newest developments. As the benefits of these developments are very clear, the fast-innovating tech industry might have a blind spot for some of their dark sides. The Netflix series Black Mirror plays the devil's advocate towards future technological developments, and some episodes even show its power of prediction.

Dating apps are extremely popular these days and are supposed to help us find "the one". Based on our preferences, algorithms can find the people who might be the best fit, and from the people you like or dislike, machine learning becomes better at finding the perfect match for you. In Black Mirror's S4E4, dating apps are taken to another level. When two individuals are matched, the system brings them together and gives their relationship a due date. After that due date, they both move on to the next relationship, until the perfect match is found. In the meantime, the system analyses behaviour and uses machine learning to better predict the perfect match. The dark side of the story is that when your relationship has a due date, people start to behave differently, and the system decides whether you can stay together. Imagine falling in love while a system that is not sure about the match forces you to leave. What does love even mean then?

Another perfect example of the dark side of technology can be seen in S3E1. This episode illustrates a social media system in which everyone can rate each other (on a scale of 1 to 5) based on their interactions. Your average score determines your way of living: high average ratings give you perks in your daily life. This would stimulate people to improve their lives by working out, being kind and reaching their personal goals. However, it also works the other way around. If you interact with low-rated people or act inappropriately towards others, it becomes hard to get access to the places where high-rated people live together. This episode might seem a bit over the top and unrealistic, but guess what: we are already living this life.

Take Instagram as an example. The number of likes, comments and followers influences people's lives on a daily basis. It is no coincidence that whenever there is a conversation about someone, people first ask to see their Instagram profile. Social media has become a platform of judgment and impressions, causing people to pretend to have a great lifestyle while all they do is edit photos and interact with people online.

Besides the self-awareness and judgment that come with social media, rating systems are already a fact of society. A couple of years ago, some places in China started to work with social credit systems. Starting with a certain amount of credits, people can gain or lose credits based on their behaviour as monitored by public cameras. Yes, the cameras have face recognition and record every step and move inhabitants make. Losing credits can be a result of public misbehaviour like polluting public areas, but can also derive from not visiting your family enough or hanging out with people who have low credits themselves. A low amount of credits can exclude you from buying airplane tickets or using public transport. In the region of Xinjiang, this system has already led to increased surveillance of and discrimination against the Uighur minority.

It is time for us to wake up and become aware that the fast-evolving tech industry does not only bring us benefits, but also has some really dark sides. Is this a new field of expertise? Should we be educated on the dangers of technology? Who is responsible for writing ethical codes or laws? It is up to humanity to decide whether we use technology as a tool or let it outgrow our control.


Sources

https://www.theguardian.com/commentisfree/2020/may/11/black-mirror-episode-dystopian-tv

https://www.imdb.com/title/tt2085059/episodes?season=4&ref_=ttep_ep_sn_pv

https://theconversation.com/black-mirror-the-dark-side-of-technology-118298

https://www.nytimes.com/2018/12/28/arts/television/black-mirror-netflix-interactive.html

https://www.wired.co.uk/article/china-social-credit-system-explained

https://www.nytimes.com/2019/05/22/world/asia/china-surveillance-xinjiang.html

https://aisel.aisnet.org/ecis2019_rip/33/

https://journals.sagepub.com/doi/full/10.1177/1461444815604133


BIM, Meet Gertrude!

6 October 2020

Gertrude enjoying a well-deserved drink during her performance.

In August 2020, famous tech entrepreneur Elon Musk revealed his latest technological project: a pig called Gertrude. At first sight, Gertrude looks like an ordinary pig. She seems healthy, curious, and eager to taste some delicious snacks. When looking at her, it is hard to imagine how she managed to get one of the world's most radical and well-known tech entrepreneurs so excited. Gertrude just seems normal.

This is exactly the point!


Elon Musk “Gotcha”

Gertrude is no ordinary pig. She has been surgically implanted with a brain-monitoring chip, Link V0.9, created by one of Elon Musk’s latest start-ups named Neuralink.

Neuralink was founded in 2016 by Elon Musk and several neuroscientists. The short-term goal of the company is to create devices to treat serious brain diseases and overcome damaged nervous systems. Our brain is made up of 86 billion neurons: nerve cells which send and receive information through electrical signals. According to Neuralink, your brain is like electric wiring. Rather than having neurons send electrical signals, these signals could be sent and received by a wireless Neuralink chip.

To simplify: Link is a Fitbit in your skull with tiny wires

The presentation in August was intended to show that the current version of the Link chip works and has no visible side effects for its user. The user, in this case Gertrude, behaves and acts as she would without it. The chip is designed to be implanted directly into the brain by a surgical robot. Getting a Link would be a same-day surgery taking less than an hour. This creates the opportunity for Neuralink to go to the next stage: the first human implantation. Elon Musk expressed that the company is preparing for this step, which will take place after further safety testing and receiving the required approvals.

The long-term goal of Neuralink is even more ambitious: human enhancement through merging the human brain with AI. The system could help people store memories or download their mind into robotic bodies. An almost science-fictional idea, fuelled by Elon Musk's fear of artificial intelligence (AI). Already in 2014, Musk called AI "the biggest existential threat to humanity". He fears that, at the current development rate, AI will soon reach the singularity: the point where AI has reached intelligence levels substantially greater than that of the human brain and technological growth has become uncontrollable and irreversible, causing unforeseeable effects on human civilization. Hollywood has given us examples of this with The Matrix and Terminator. With the strategy of "if you cannot beat them, join them", Elon Musk sees the innovation done by Neuralink as an answer to this (hypothetical) catastrophic point in time. By allowing human brains to merge with AI, Elon Musk wants to vastly increase the capabilities of humankind and prevent human extinction.

Man versus Machine

So, will we all soon have Link like chips in our brains while we await the AI-apocalypse?

Probably not. Currently, the Link V0.9 only covers data collected from a small number of neurons in a coin-sized part of the cortex. For Gertrude, Neuralink's pig whom we met earlier in this article, this means being able to wirelessly monitor her brain activity in a part of the brain linked to the nerves in her snout. When Gertrude's snout is touched, the Neuralink system can register the spikes produced by neurons firing electrical signals. In contrast, major human functions typically involve millions of neurons from different parts of the brain. To make the device capable of helping patients with brain diseases or damaged nervous systems, it will need to collect much larger quantities of data from multiple different areas of the brain.
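Registering a "neural spike" boils down to spotting when a recorded voltage crosses a threshold. The sketch below is a hypothetical simplification of that idea, not Neuralink's actual signal processing:

```python
# Count a spike whenever the recorded signal crosses the threshold
# from below (an upward threshold crossing).
def detect_spikes(samples, threshold):
    spikes = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:
            spikes.append(i)  # index where the signal crosses upward
    return spikes

# Toy voltage trace (arbitrary units) with two clear crossings.
trace = [0.1, 0.2, 1.5, 0.3, 0.1, 1.8, 2.0, 0.2]
print(detect_spikes(trace, threshold=1.0))  # -> [2, 5]
```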

On top of that, brain research has not yet achieved a complete understanding of the human brain. There are many functions and connections that are not yet understood. It appears that the ambitions of both Elon Musk and Neuralink are ahead of current scientific understanding.

So, what next?

Neuralink has received a Breakthrough Device designation from the US Food and Drug Administration (FDA), the organisation that regulates the quality of medical products. This means Neuralink can interact with the FDA's experts during the premarket development phase, and it opens the path towards human testing. The first clinical trials will be done on a small group of patients with severe spinal cord injuries, to see if they can regain motor functions through thoughts alone. For now, a medical goal with potentially life-changing outcomes, while we wait for science to catch up with Elon Musk's ambitions.


Thank you for reading. Did this article spark your interest?
For more information, I recommend you to check out Neuralink’s website https://neuralink.com/

Curious how Gertrude is doing?
Neuralink often posts updates on their Instagram page https://www.instagram.com/neura.link/?hl=en

Want to read more BIM-articles like this?
Check out related articles created by other BIM students in 2020:

Sources used for this article:



Europe and the 5G Challenge

22 September 2020

In September 2020, the European Round Table for Industry published a report on the EU-27’s advancements in 5G technologies. This article briefly explains the findings of this report and the causes behind such results.


With the competition for the development of 5G networks increasing every day, companies all around the world have been playing a tense chess game for the leadership of this game-changing technology. However, as the chairman of the European Round Table for Industry (ERT), Carl-Henric Svanberg, said in an interview with the Financial Times, it seems that Europe has been left far behind in this race, with an approach that could well end in failure and drive investment down.

 

On September 18th 2020, the ERT published a report in which the 27 Member States of the European Union and their advancements in both 5G and 4G were analysed and assessed. The report identifies a gap between the European Union and other powerful economies across the globe. For instance, it points out that both the US and South Korea have had commercial 5G services available for a year, with South Korea counting 1,500 base stations per million capita, whereas the majority of Member States have not even launched commercial 5G services and, in total, have only ten 5G base stations deployed per million capita.

 

The contrast between these economies' progress in 5G networks can in great part be explained by the diversity of countries within the European Union and the differences among them. Each Member State has its own particular political and economic situation, on top of the shared situation of the European Union as a single economic power. This makes it hard to coordinate the high and inconsistent costs, and the varying returns on investment, across the various States.

 

Despite Europe's potential in digital innovation, which drives the emergence of start-up hubs such as Amsterdam, Berlin and Lisbon, the region seems to be left behind in the roll-out of 5G networks. A key factor hampering this progress is spectrum availability and spectrum licensing. With many European telecoms allocated narrower bandwidths and spectrum licensing being especially costly in some countries, the roll-out of 5G faces a complicated and uncertain environment, resulting in several restrictions on innovation, investment and network deployment.

 

Moreover, while China's technology and networking company Huawei progresses in its development of 5G networks, the US Government is moving quickly to stop the internationalisation of those advancements. This has driven European economies into a further state of confusion and blockage. Outside the European Union, the United Kingdom has sided with the US: in July 2020 it banned new Huawei equipment, resulting in both a delay of two to three years in the roll-out of 5G phone networks and an increase in cost of £2bn. This example paints a clearer picture of the potentially self-sabotaging and slow advancement of Europe as a whole.

 

All factors combined result in the current slow evolution of 5G networks in Europe compared to the advancements of other powerful economies such as China, South Korea and the US. It is now crucial for the European Union to think about strategies to overcome the obstacles it faces both internally and externally, to avoid further economic turmoil and to boost its own technological strengths for the development of 5G.

 

References

ERT, 2020. Assessment of 5G Deployment Status in Europe. Available at: https://ert.eu/wp-content/uploads/2020/09/ERT-Assessment-of-5G-Deployment-Status-in-Europe_September-2020.pdf [Accessed September 22, 2020].

Lemstra, W., 2018. Leadership with 5G in Europe: Two contrasting images of the future, with policy and regulatory implications.


Your Profile Is Being Scraped

18 September 2020


Facial recognition has been gaining interest over the last few years. All around the internet, and also on this forum, more and more is being written about facial recognition itself, its positive and negative effects, and the underlying technologies. Major companies are competing to develop better algorithms and are selling the resulting technologies as cloud services. Easy APIs make it possible for every tech-savvy person to use those services within minutes. Still, facial recognition remains mostly theory and little action: current news items often discuss a few local tests or the implementation of video tracking within law enforcement. The major steps in facial recognition are being made in China, where facial identification and facial payment are becoming mainstream. But over the last year one company's name popped up several times, gaining the interest of several tech journalists: Clearview AI.

A lot of people nowadays have some social media profile, often with a public name, profile picture and some basic information. Of course it would be possible to go to every page and collect user information manually, but no one ever took the time to do this or saw the benefit of doing so, except the startup Clearview AI.

Scraping is the act of automatically extracting public data from the internet. Every website can be scraped, even all the data and texts from this blog. Clearview AI performed these scraping operations on a huge scale: they scraped the public profiles of Facebook and saved the data in one big database. If your profile picture and name are public on one of your social media accounts, which is true of most profiles, it is likely that they are included in Clearview AI's database.
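The core of scraping is mechanical: fetch a page and pull out structured fields. The snippet below sketches the idea with Python's built-in HTML parser on a made-up profile page; the tag layout and names are invented for illustration.

```python
from html.parser import HTMLParser

# Pull public profile names and picture URLs out of raw HTML.
class ProfileScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            # Keep the picture URL together with the name in its alt text.
            self.images.append((attrs.get("alt"), attrs.get("src")))

# A made-up snippet standing in for a downloaded profile page.
page = """
<div class="profile"><img src="/p/1.jpg" alt="Jane Doe"></div>
<div class="profile"><img src="/p/2.jpg" alt="John Smith"></div>
"""

scraper = ProfileScraper()
scraper.feed(page)
print(scraper.images)  # -> [('Jane Doe', '/p/1.jpg'), ('John Smith', '/p/2.jpg')]
```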

Would not every law enforcement agency be interested in the possibility of finding a suspect within a few clicks? Robbers, fraudsters and cyberbullies are also people, most of them with a personal social media account. This is exactly what Clearview AI thought while developing their business model: scrape all publicly available data, train huge neural networks and sell the result worldwide, bundled in a good-looking application, to law enforcement agencies. According to a graph in the New York Times, this takes the number of photos the FBI can search from its own database of 411 million photos to a staggering 3 billion photos included in the Clearview AI application, all supported by an impressive artificial intelligence model.

This brings up some important questions. Do we support facial recognition as a tool of law enforcement? Is it legal to scrape information from social networks? Does making your profile public also imply that you give permission for your data to be saved and used for AI training purposes?

Besides the negative sides of web scraping, there are also interesting possibilities. You could, for example, scrape this blog and analyze its word usage, or identify trends and topics of interest over time. Web scraping also enables new innovations that aggregate data from multiple sources in creative ways, creating information that was not available before.
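As a small sketch of that benign use case, the snippet below counts word frequencies across scraped blog posts with Python’s standard library. The post texts here are placeholders, not actual scraped content.

```python
import re
from collections import Counter

# Placeholder texts standing in for scraped blog posts.
posts = [
    "Facial recognition is gaining interest, facial recognition is everywhere.",
    "Web scraping enables new innovations in facial recognition.",
]

def word_frequencies(texts):
    """Lower-case the texts, split on non-letters, and count every word."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts

freq = word_frequencies(posts)
print(freq.most_common(3))
```

Feed it posts grouped by month instead of one flat list and the same counting logic gives you topic trends over time.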

The New York Times has an article that goes deeper into the background of Clearview AI. Read the full article or listen to the accompanying podcast if you’re interested.

I would love to hear your opinion on web scraping and the use of facial recognition. If you would like a more technical background on how to implement web scraping techniques, please let me know in the comments.

 

Sources

Hill, K. (2020, January 18). The Secretive Company That Might End Privacy as We Know It. The New York Times. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Matsakis, L. (2020, January 27). Scraping the Web Is a Powerful Tool. Clearview AI Abused It. Wired. https://www.wired.com/story/clearview-ai-scraping-web/

 

 


Professors! Get online or get out!

16

October

2019

5/5 (1)


As a BIM master student, I was quite surprised when I heard that none of the courses were recorded and made available online. Everyone I ever spoke to about it was enthusiastic about recorded lectures. Maybe all of my friends are just lazy students (like me) who prefer to stay in bed rather than go to a 9 am lecture, but I genuinely think it offers more convenience than disadvantages. Wondering about this was the main reason for me to write on the subject.


MOOC stands for Massive Open Online Course: an (often free) course that is available to the public through online lectures and assignments (EdX, 2019). It provides great advantages, as you can enroll from anywhere in the world, as long as you have access to a decent internet connection.

First of all, and maybe the most obvious advantage of MOOCs, is that the internet knows no borders. Of course we all know the Great Chinese Firewall, but someone from South Korea is able to visit the website of a local Colombian bee farm. Therefore, people from more isolated areas, like sub-Saharan Africa, are able to follow these courses as long as there is a decent internet connection and a streaming device. According to UNESCO (2016), sub-Saharan Africa has the highest rates of education exclusion in the world: almost 60% of all youth between 15 and 17 there are not in school. Yes, they still require a streaming device, but a phone screen is in theory enough, and video projectors can be installed in classrooms.

This brings us to another advantage of MOOCs: there is (in theory) no maximum student capacity. As a digital product, a course can be copied infinitely without any reduction in quality. This means an enormous number of people could follow the course of a single professor. This seems like a situation with only benefits, but there are risks. If a single professor is enough to educate a massive group of people, then I foresee a decrease in the need for professors. This may lead to many professors losing their jobs and having to seek other ways to earn a living.

MOOCs being digital goods also brings a major risk: the risk of course content being copied and spread without consent or compensation. Screens can be recorded and assignments copied. Websites like The Pirate Bay that provide a lot of illegal content are still available today, whether through a proxy server or not. A solution must be found to prevent piracy, because a single pirate is enough to create a lot of damage.

 

Another advantage of MOOCs is that they provide an opportunity to gather data about their students. It can be tracked how much and when students spend time on the website, and which classes and courses are more or less attractive. Students could provide a rating and a comment after every course. A risk of having too many students enrolled is that a single professor may not be able to answer all questions or analyze all the feedback. This shows that a MOOC is not simply a professor with a webcam, but really requires a well-structured team or organization.
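The kind of tracking described above boils down to aggregating a viewing log. A minimal sketch, with made-up session data and hypothetical course names, could look like this:

```python
from collections import defaultdict

# Made-up viewing log: (student_id, course, minutes_watched) per session.
sessions = [
    ("s1", "BIM101", 30),
    ("s2", "BIM101", 45),
    ("s1", "BIM101", 15),
    ("s1", "STAT200", 60),
]

def minutes_per_course(log):
    """Total watch time per course, a basic signal of how attractive a course is."""
    totals = defaultdict(int)
    for _student, course, minutes in log:
        totals[course] += minutes
    return dict(totals)

print(minutes_per_course(sessions))  # {'BIM101': 90, 'STAT200': 60}
```

The same log, grouped by student instead of by course, would tell the MOOC team who is falling behind.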

I would advise professors and universities to brainstorm about the threats and opportunities of an increasingly digitized society. I believe it is very important not to miss the boat and to exploit first-mover advantages. Otherwise, you will remain the incumbent while others become the disruptors.

 

References

EdX. (2019). mooc.org. Retrieved October 16, 2019, from http://mooc.org/.

UNESCO. (2016). Education in Africa. Retrieved October 16, 2019, from http://uis.unesco.org/en/topic/education-africa.

 

How did porn shape the digital space?

16

October

2019

No ratings yet.

Welcome to the weird side of the internet again! That’s what you get for enabling auto-play…

In this piece I’d like to highlight just how much of our modern digital world owes its existence to porn. Why? Because it’s both funny and true. So to really figure out what porn has done for digital progress, we need to start at the beginning:

What have they done so far?

Long ago, in a little unknown country called the United States of America, the internet was born. This internet was unwieldy, slow, confusing and in desperate need of life support. The adult industry is one of the biggest reasons the early internet retained enough users for its continued development, keeping people hooked and coming back for obvious reasons. It pioneered streaming, pop-up ads, online transactions and user tracking, and is one of the reasons e-commerce is so large today. It was even a driver for increasing the bandwidth of the internet to facilitate more porn, and all the other services benefited (Benes, 2013: https://www.businessinsider.com/how-porn-drives-innovation-in-tech-2013-7?international=true&r=US&IR=T).

What are they doing now?

The adult industry’s influence on innovation is less prominent nowadays, due to its nature as the first to capitalize on technologies and trends. It does, however, still contribute (Gross, 2010: http://edition.cnn.com/2010/TECH/04/23/porn.technology/index.html)! Deepfakes, AI and haptic feedback aren’t innovations made by the porn industry specifically, but the industry is driving their practical application far earlier than others.

  • Deepfakes are already being used to fake celebrities for porn; the industry is advancing the tech regardless, and it is getting better all the time.
  • AI is being tested to create an interactive porn experience, which will likely be translated to other applications if successful.
  • Haptic feedback is being used to create sex toys that can accurately simulate sex with a long-distance partner or along with a porn video, but it has multiple applications in interface and product design.

If someday Siri and Alexa become sentient, you can thank porn for that!


The Threat of Deepfakes

12

October

2019

5/5 (2)

Last summer an app called DeepNude caused a lot of controversy in the (social) media. DeepNude was an AI-based piece of software with the ability to create very realistic nude pictures from any face uploaded to the app. Mass criticism followed, the app’s servers were overloaded by curious people, and not much later the app went offline permanently. DeepNude stated on Twitter that the probability of misuse was too high and that the world “was not ready yet”. The app has not come back online since (Deepnude Twitter, 2019). It shows that deepfake technology is becoming available to the public sooner than we thought, including all its potential risks.

A definition of deepfake is “AI-based technology used to produce or alter video content so that it presents something that didn’t, in fact, occur” (Rouse, 2019). As deepfake is AI-based technology, it is able to improve over time: as the amount of input data increases, the technology learns how to create better output. In my opinion deepfake has amazing potential in the entertainment industry, but there is a serious risk when the technology is misused. The AI technology makes it harder and harder for humans to distinguish real videos from fake ones. Deepfake videos of world leaders like Trump and Putin can already be found on the internet, and deepfake porn videos of celebrities are discovered once in a while.

With the upcoming 2020 presidential elections in the United States, politicians and many others are seeking solutions to prevent a scenario similar to the 2016 elections, which were characterized by the spread of fake news and the ongoing allegations resulting from it. These events very likely influenced the outcome of those elections (CNN, 2019). Recently the state of California passed a law which “criminalizes the creation and distribution of video content (as well as still images and audio) that are faked to pass off as genuine footage of politicians” (Winder, 2019). In 2020 we’ll find out whether deepfakes have been restricted successfully.

I hope developers and users of deepfake technology become aware of its huge threats and use it in a responsible way. It is also important for society to stay critical of its news sources and to avoid supporting these types of technology misuse. According to Wired (Knight, 2019), Google has released thousands of deepfake videos to be used as input for AI that detects other deepfake videos. Another company, called Deeptrace, is using deep learning and AI to detect and monitor deepfake videos (Deeptrace, n.d.).

See you in 2020…

References

CNN. (2019). 2016 Presidential Election Investigation Fast Facts. Retrieved from CNN: https://edition.cnn.com/2017/10/12/us/2016-presidential-election-investigation-fast-facts/index.html

Deepnude Twitter. (2019). deepnudeapp Twitter. Retrieved from Twitter: https://twitter.com/deepnudeapp

Deeptrace. (n.d.). About Deeptrace. Retrieved from Deeptrace: https://deeptracelabs.com/about/

Knight, W. (2019). Even the AI Behind Deepfakes Can’t Save Us From Being Duped. Retrieved from Wired: https://www.wired.com/story/ai-deepfakes-cant-save-us-duped/

Rouse, M. (2019). What is deepfake (deep fake AI). Retrieved from TechTarget: https://whatis.techtarget.com/definition/deepfake

Winder, D. (2019). Forget Fake News, Deepfake Videos Are Really All About Non-Consensual Porn. Retrieved from Forbes: https://www.forbes.com/sites/daveywinder/2019/10/08/forget-2020-election-fake-news-deepfake-videos-are-all-about-the-porn/#26a929963f99

 

 

Tracking what you watch

19

September

2019

5/5 (1)

With current technologies evolving fast, more and more data is generated, sometimes via channels you would not expect. In this article, I would like to tell you about one of the lesser-known ways that organisations are already tracking people: a technology called ‘eye-tracking’.

 

What is eye-tracking?

Eye-tracking is the process of measuring where someone is looking, or the motion of the eye relative to the head, using an advanced camera that records eye positions and eye movements. Eye trackers are used in research on the visual system, in psychology, psycholinguistics and marketing, as an input device for human-computer interaction, and in product design.


Why is eye-tracking on the rise?

One of the reasons that eye-tracking is on the rise is that the cameras used for this technology are getting cheaper. There are eye trackers available for $199; a few years ago, this was still a few thousand dollars. With lower prices, the technology becomes accessible for wider use.

The data can be used by analysts for user insights. Especially in marketing, this can have a big impact: because companies are always looking for new ways to gather insights about their customers, it is interesting for them to experiment with eye-tracking. However, to really get a grasp of what eye-tracking can do, let us look at some use cases.

 

Use cases

I personally believe eye-tracking can revolutionize our relationship with mobile devices. In the future, it will be possible to control your mobile device and surf the internet with your eyes. Another example where this might come in handy is healthcare: during operations, surgeons cannot touch a computer for hygiene reasons, but with this technology they could still control one.

Gathering insights can also be revolutionized. Where websites now use Hotjar to create heatmaps based on where you click, these heatmaps can evolve into heatmaps based on your eye movements.
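Under the hood, such a heatmap boils down to binning coordinates onto a grid: the more gaze samples land in a cell, the “hotter” it is. A minimal sketch with made-up gaze samples (the screen size, cell size and sample points are all illustrative):

```python
# Made-up (x, y) gaze samples, in pixels, on a 100x100 screen region.
samples = [(10, 12), (11, 13), (12, 11), (80, 85), (81, 84)]

def gaze_heatmap(points, width=100, height=100, cell=25):
    """Bin gaze points into a coarse grid; higher counts mean longer viewing."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in points:
        grid[min(y // cell, rows - 1)][min(x // cell, cols - 1)] += 1
    return grid

heatmap = gaze_heatmap(samples)
for row in heatmap:
    print(row)
```

A click-based tool like Hotjar works on exactly the same principle; only the source of the coordinates changes, from mouse clicks to gaze samples.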

Eye-tracking can also be used in mobility, to make sure drivers keep their eyes on the road. This can prevent accidents by alerting drivers who are falling asleep behind the wheel.

 

With all these use cases, cheaper eye-trackers and evolving technology, I am curious to see what eye-tracking will bring in the future. How about you?

 

Sources

“Basics | Eyetribe-Docs.” Theeyetribe.Com, 2014, dev.theeyetribe.com/general/index.html. Accessed 19 Sept. 2019.

Farnsworth, Bryn. “What Is Eye Tracking and How Does It Work? – IMotions.” IMotions, 2 Apr. 2019, imotions.com/blog/eye-tracking-work/.

“Tobii Tech – What Is Eye Tracking?” Tobii.Com, 17 Sept. 2015, www.tobii.com/tech/technology/what-is-eye-tracking/. Accessed 19 Sept. 2019.

“What Is Eye Tracking?” Eyetracking.Com, 2011, www.eyetracking.com/About-Us/What-Is-Eye-Tracking. Accessed 19 Sept. 2019.

“What Is Eye Tracking? How Is Eye Tracking Valuable in Research?” Tobiipro.Com, 6 Mar. 2018, www.tobiipro.com/blog/what-is-eye-tracking/.

Wikipedia Contributors. “Eye Tracking.” Wikipedia, Wikimedia Foundation, 21 Aug. 2019, en.wikipedia.org/wiki/Eye_tracking. Accessed 19 Sept. 2019.
