AI-based hiring tools: HireVue

9 October 2021


As this course draws to a close, I want to use this opportunity to continue my series on AI-based hiring tools and introduce one more tool that you may encounter in the near future: HireVue. If you haven’t already, please refer to my first blog post about Pymetrics for more context on this series.

Figure 1. Accenture recruitment process (Via Accenture)

Assuming you have passed the online assessment stage (Pymetrics) in Figure 1, you will now advance to the digital (on-demand) interview stage. Most companies that follow this structure use the HireVue platform to conduct this interview.

What is HireVue?

HireVue is a software company that provides video-interviewing services to employers. In this blog we will focus on its on-demand interview, in which candidates are asked questions pre-determined by the company via the HireVue platform. Candidates record their answers to these questions, which are subsequently submitted to the company.

Here is where it gets interesting: your recorded answers are not evaluated by a human recruiter, but by AI. HireVue uses voice and facial recognition software to analyze your answers and assign you a score, which is then used to rank you amongst the other applicants. More concretely, you are ranked based on your facial expressions, eye contact and movements, body language, tone, and the keywords in your recorded answers.
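
HireVue does not disclose how these scores are computed. Purely to illustrate the ranking idea, below is a minimal Python sketch that combines a few hypothetical interview features into one score; every feature name and weight is my own assumption, not HireVue’s actual model.

```python
# Hypothetical sketch of feature-based interview ranking. HireVue's real
# model is proprietary; all feature names and weights here are invented.
from dataclasses import dataclass

@dataclass
class InterviewFeatures:
    keyword_match: float    # 0..1, overlap with job-relevant keywords
    speech_clarity: float   # 0..1, derived from the audio track
    tone_positivity: float  # 0..1, derived from intonation

# Illustrative weights; a production system would learn these from data.
WEIGHTS = {"keyword_match": 0.5, "speech_clarity": 0.3, "tone_positivity": 0.2}

def score(f: InterviewFeatures) -> float:
    """Collapse the feature vector into a single 0..1 score."""
    return (WEIGHTS["keyword_match"] * f.keyword_match
            + WEIGHTS["speech_clarity"] * f.speech_clarity
            + WEIGHTS["tone_positivity"] * f.tone_positivity)

candidates = {
    "candidate_a": InterviewFeatures(0.8, 0.7, 0.6),
    "candidate_b": InterviewFeatures(0.7, 0.9, 0.8),
}

# Rank candidates from highest to lowest score, as a recruiter dashboard might.
ranking = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranking)  # ['candidate_b', 'candidate_a']
```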

What does the on-demand interview look like?

Figure 2. Example of a question on the HireVue interview platform (Via Cultivatedculture)

Figure 2 depicts a candidate in the midst of recording an answer to one of the pre-determined questions. Before recording starts, the candidate gets 30 seconds to think about what he or she wants to say. The company you are applying to decides whether to allow retries (in case you mess up your answer); if it does, the number of attempts is usually communicated at the beginning of the interview (in my own experience with HireVue, I had three attempts per question).

HireVue vs. Pymetrics

HireVue was introduced into the recruitment process for reasons similar to Pymetrics: to leverage (faster) AI-driven predictions that allow for increased diversity and mitigated bias when selecting applicants.

However, in contrast to Pymetrics, HireVue’s algorithm has faced large waves of public criticism. AI researchers have frequently voiced their concerns, calling HireVue’s technology ‘profoundly disturbing’. The criticism reached its boiling point in late 2019, when the prominent rights group Electronic Privacy Information Center (EPIC) filed an official complaint with the Federal Trade Commission, urging it to investigate HireVue and its business practices. In response, HireVue announced in January 2021 that it would stop relying on facial recognition to assess job candidates. It will, however, continue to analyze other biometric data, including speech, intonation and behaviour.

Voice your opinion!

Having read the above, where do you stand in regard to HireVue? Would you welcome such an interview, especially considering that HireVue interviews may be used at the very company you have your sights set on (Accenture, JP Morgan, Goldman Sachs, Morgan Stanley, etc.)?

References

Did this blog pique your interest? Please refer to the sources below for more in-depth coverage of this topic:

  • https://www.accenture.com/th-en/careers/local/recruiting-in-the-new
  • https://www.hirevue.com/blog/hiring/video-interviewing-guide
  • https://www.topinterview.com/interview-advice/what-is-a-hirevue-interview
  • https://searchhrsoftware.techtarget.com/definition/HireVue
  • https://cultivatedculture.com/video-interviews
  • https://www.washingtonpost.com/technology/2019/10/22/ai-hiring-face-scanning-algorithm-increasingly-decides-whether-you-deserve-job
  • https://epic.org/2021/01/hirevue-facing-ftc-complaint-f.html


Deepfake Fraud – The Other Side of Artificial Intelligence

8 October 2021

Dangers of AI: How deepfakes created with Artificial Intelligence could be used for fraud, scams and cybercrime.


Together with Machine Learning, Artificial Intelligence (AI) can be considered one of the hottest, if not the hottest, emerging innovations in the field of technology today (Duggal, 2021). AI refers to the ability of a computer or machine to ‘think for itself’: it strives to mimic human intelligence instead of simply executing actions it was programmed to carry out. Using algorithms and historical data, AI applies Machine Learning to comprehend patterns and determine how to respond to certain actions, thus creating ‘a mind of its own’ (Andersen, n.d.).

History

Even though the initial days of Artificial Intelligence research date back to the late 1950s, the technology has only recently been introduced to the general public on a wider scale. The science behind the technology is complex; however, AI is becoming more widely known and used on a day-to-day basis. This is because computers have become much faster and the data for AI to learn from has become more accessible (Kaplan & Haenlein, 2020). This allows AI to be more effective, to the point where it has already been implemented in everyday devices such as our smartphones. Do you use speech or facial recognition to unlock your phone? Do you use Siri, Alexa or Google Assistant? Ever felt like advertisements on social media resonate a bit too much with your actual interests? Whether you believe it or not, it is highly likely that both you and I come into contact with AI on a daily basis.

AI in a nutshell: How it connects to Machine/Deep Learning

That’s good… right?

Although the possibilities for applying AI positively seem endless, one recent phenomenon that shocked the world about the dangers of AI is ‘deepfaking’. Here, AI uses a Deep Learning algorithm to replace a person in a photo or video with someone else, creating seemingly (!) authentic and real visuals of that person. As one can imagine, this results in media in which people appear to be doing things that in reality they never did. Although people fear the use of deepfake technology against celebrities or high-status individuals, this can – and actually does – happen to regular people, possibly you and me.
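
For the technically curious: the classic deepfake setup trains one shared encoder together with a separate decoder per identity, and ‘swaps’ faces by decoding one person’s frames with the other person’s decoder. The PyTorch sketch below shows that architecture in its most minimal form; the layer sizes are illustrative assumptions, and training, face detection and alignment are omitted entirely.

```python
# Minimal sketch of the classic deepfake architecture: one shared encoder,
# one decoder per identity. Dimensions are illustrative assumptions; this is
# not a working face-swapper, just the structural idea.
import torch
import torch.nn as nn

LATENT = 256  # size of the shared face representation

encoder = nn.Sequential(          # shared: learns identity-agnostic features
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, LATENT),
    nn.ReLU(),
)

def make_decoder() -> nn.Sequential:  # one per identity: reconstructs that face
    return nn.Sequential(
        nn.Linear(LATENT, 3 * 64 * 64),
        nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# The "swap": encode a frame of person A, decode it with B's decoder,
# producing B's face with A's pose and expression (after real training).
frame_of_a = torch.rand(1, 3, 64, 64)
fake_b = decoder_b(encoder(frame_of_a))
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])
```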

Cybercrime

Just last month, scammers from all over the world were reported to have been creatively using this cybercrime ‘technique’ to defraud, scam or blackmail ordinary people (Pashaeva, 2021). From posing as a wealthy bank owner to extract money from investors, to blackmailing people with videos of them seemingly engaging in a sexual act… as mentioned before, the possibilities for exploiting AI seem endless, and deepfakes are just another illustration of this fact. I simply hope that, in time, the positives of AI outweigh the negatives. I would love to hear your perspective on this matter.

Discussion: Deepfake singularity

For example, would you believe this was actually Morgan Freeman if you did not know about Artificial Intelligence and deepfakes? What could this technology cause in the long term, once the AI develops into a much more believable state? Will we always be able to spot the fakes? And what could this lead to in terms of scamming or blackmailing if, say, Morgan Freeman were made to say other things…?

References

Duggal, N. (2021). Top 9 New Technology Trends for 2021. Available at: https://www.simplilearn.com/top-technology-trends-and-jobs-article

Andersen, I. (n.d.). What Is AI and How Does It Work? Available at: https://www.revlocal.com/resources/library/blog/what-is-ai-and-how-does-it-work

Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1). https://doi.org/10.1016/j.bushor.2019.09.003

Pashaeva, Y. (2021). Scammers Are Using Deepfake Videos Now. Available at: https://slate.com/technology/2021/09/deepfake-video-scams.html


Author: Roël van der Valk

MSc Business Information Management student at RSM Erasmus University. Student number: 483426. TA for BM01BIM Information Strategy 2022.

AI-based hiring tools: Pymetrics

19 September 2021


By this time next year, most of you will have graduated from the master’s programme and made yourselves available on the job market. In this blog I want to shed some light on a gamified assessment called the pymetrics games, which you may encounter during the assessment stage of a job application process.

For those of you who are not familiar with assessments or applying for jobs in general, the traditional job application process (as I have experienced it) generally looks something like Figure 1.

Figure 1. Traditional job application process (via Sierrasoln). Note: steps may vary depending on the type of job or sector.

The traditional (online) assessment stage will generally consist of:

  • ability tests measuring your performance in deductive, numerical and logical reasoning;
  • a personality test (questionnaire).

This is where Pymetrics comes in: a company that specialises in developing gamified assessments for recruitment purposes. Companies have opted to fully replace the aforementioned assessment stage with Pymetrics’ patented pymetric games; some of the most notable are Boston Consulting Group (BCG), JP Morgan, Accenture and Unilever.

What do these pymetric games entail?

The pymetric games are Pymetrics’ core product: an online gamified assessment in which candidates play through a series of 12 minigames that take two to three minutes each. The assessment uses neuroscience and AI to assess 91 different cognitive traits. An example of one of the minigames is depicted in Figure 2.

Figure 2. Balloon minigame in which candidates earn money with every balloon pump. Pumping too much causes the balloon to explode and forfeits all the money earned for that balloon.

How does it work (in a nutshell)?

Pymetrics creates a custom algorithm for a company by having at least 50 of that company’s top performers play the pymetric games. The resulting model is subsequently used as a benchmark when evaluating applicants’ results. Pymetrics markets its algorithm as entirely bias-free, having successfully subjected it to extensive AI audits in order to prove this claim.
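
Pymetrics’ matching model is proprietary, but the benchmarking idea can be sketched as follows: average the trait vectors of the top performers into a ‘success profile’, then score applicants by their similarity to that profile. All numbers below are synthetic, and the cosine-similarity choice is my assumption rather than Pymetrics’ documented method.

```python
# Hedged sketch of the benchmark idea: build a trait profile from a company's
# top performers, then score applicants by similarity to it.
import numpy as np

rng = np.random.default_rng(0)

# Each row: one top performer's measured trait vector (91 traits in reality;
# 5 here to keep the example small).
top_performers = rng.random((50, 5))
benchmark = top_performers.mean(axis=0)  # the company's "success profile"

def fit_score(applicant: np.ndarray) -> float:
    """Cosine similarity between an applicant's traits and the benchmark."""
    return float(applicant @ benchmark
                 / (np.linalg.norm(applicant) * np.linalg.norm(benchmark)))

applicant = rng.random(5)
print(round(fit_score(applicant), 3))  # closer to 1.0 = closer to the profile
```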

So what’s the catch?

As my fellow students Andrew Tan and Tamas Vincze have already explained in great detail: algorithms are inherently biased. In addition, an independent AI audit of Pymetrics’ algorithm found that although the tool passed the formal checks, the audit did not prove that it is bias-free, nor that it actually picks the most qualified candidates for a job.

This brings me to my question: how do you feel about an AI-based hiring assessment being put into practice? Would you rather take the traditional online assessment? Having personally experienced both types of assessment, I am curious to see where my fellow students stand, especially as you prepare for your job search.

References

Did this blog pique your interest? Please refer to the sources below for more in-depth coverage of this topic:

  • https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/
  • https://digital.hbs.edu/platform-digit/submission/pymetrics-using-neuroscience-ai-to-change-the-age-old-hiring-process/
  • https://www.graduatesfirst.com/pymetrics
  • https://hackingthecaseinterview.thinkific.com/pages/bcg-pymetrics-test
  • https://www.jobtestprep.com/pymetrics-games#balloon-game
  • https://sierrasoln.com/hiring-process/



AI-enabled China’s Social Credit System: in-depth analysis

5 October 2020


Automation has transformed every aspect of modern individuals’ lives. Trivial tasks that used to take a person hours to complete can now be performed within a matter of seconds thanks to technological advancements. Artificial Intelligence (AI) is one such advancement, paving the way for the prevalence of automation in every industry. The ability of AI to perform tasks autonomously stems primarily from its ability to process large amounts of data and infer patterns and conclusions from it, thus effectively learning tasks by itself. However, the procedures the AI uses to analyze the data are initially provided by an administrator in the form of algorithms and statistical models. An algorithm is essentially a set of rules and the process to be followed by a machine/computer to perform a calculation or action. Modern automation, stripped to its core, is a collection of algorithms and related statistical models programmed by an administrator. Due to the increased adoption of the internet, algorithms have become integrated into every aspect of our lives.

The financial credit system used in many western countries can be seen as an example of how algorithms govern our lives. The system involves gathering financial data relevant to an individual from multiple sources, after which an algorithm analyses the likelihood of that individual defaulting on a loan. The data gathered primarily consists of previous debts taken, payment deductibles not met and other forms of credit taken up by the individual in the past. After careful analysis of this data, the algorithm calculates a score for the individual: the credit score. This score is then used by banks, insurance companies and other financial institutions to determine the creditworthiness of individuals when they request their services (Petrasic & Saul, 2017).

In China, such a system exists not only to determine a citizen’s financial credit score; it extends to all aspects of a citizen’s life by judging citizens’ behavior and trustworthiness. Known as the Social Credit System, it was introduced in 2014 with the aim of having a complete database on all Chinese citizens by 2020, collected from a variety of sources. This scale of data collection is possible in China because Baidu, Alibaba and Tencent are the major providers of internet infrastructure in the country, and they work closely with the Chinese Communist Party (Kobie, 2019). The majority of the digital footprint left by Chinese citizens is on infrastructure established by these companies, making it easy for the Chinese Communist Party to access its citizens’ data. This sharing of data between private companies and the government is not commonly heard of in China’s western counterparts, and it shows the importance of the data protection laws enforced in those countries. The implementation of the Social Credit System has numerous effects on the country and its citizens at economic and social levels.
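
To make the mechanics concrete, here is a toy sketch of such a credit-scoring algorithm: it turns two invented repayment features into a default probability and maps that onto a familiar 300-850 scale. The coefficients are made up for illustration; real bureau models use far richer data and calibration.

```python
# Toy illustration of algorithmic credit scoring: estimate default risk from
# past repayment behaviour, then map the risk onto a score. All feature names
# and coefficients are invented.
import math

def default_probability(missed_payments: int, debt_ratio: float) -> float:
    """Logistic model: more missed payments / higher debt => higher risk."""
    z = -2.0 + 0.8 * missed_payments + 3.0 * debt_ratio
    return 1 / (1 + math.exp(-z))

def credit_score(missed_payments: int, debt_ratio: float) -> int:
    """Map risk onto a 300-850 scale, as many credit bureaus do."""
    risk = default_probability(missed_payments, debt_ratio)
    return round(300 + (1 - risk) * 550)

print(credit_score(0, 0.1))  # low risk  -> high score (~765)
print(credit_score(5, 0.7))  # high risk -> low score  (~309)
```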

On an economic level, the algorithms that facilitate the Social Credit System help bridge a major institutional gap: the underdeveloped financial credit system in China. As mentioned earlier, a financial credit system utilizes algorithms to calculate a credit score that determines the creditworthiness of individuals. Such credit checks can make it more difficult for individuals to access credit, or even deny them access altogether. Often, these credit checks focus on only certain aspects, such as the timely manner in which we pay our debts (Petrasic & Saul, 2017). This is simply not enough to determine the creditworthiness of individuals, as there are other factors at play that explain why individuals pay their debts over a certain time period the way they do. Commercial credit systems such as Sesame Credit (developed by Ant Financial Services Group) can therefore be seen as more valuable in determining the creditworthiness of individuals. The Sesame credit score is arguably a better predictor of trustworthiness, as it takes a broad range of important factors into account. This will prove very beneficial for financial institutions, as they will have the highest level of guarantee that the credit extended will be in safe hands. At the same time, though, a citizen with a low rating will not be eligible for large loans and will be asked to pay a very high interest rate, effectively positioning the algorithm behind the Social Credit System as the decisive entity on whether a citizen is eligible for a loan or not. The argument behind the decision to let an algorithm govern the credit eligibility of citizens is that the restrictions placed on citizens with a lower score would motivate them to be better citizens and thus achieve a better score.

However, citizens whose social credit score falls below a certain threshold may be subject to further restrictions. For example, citizens with low social credit scores can be denied access to certain services, such as (quality) education or (quality) transportation. On a social level, the Social Credit System may give rise to social segregation, where citizens with low social credit are excluded from social activities, and to reduced interaction between citizens with higher and lower social credit. Moreover, on the work floor, people with low social credit scores may fail to get a promotion because of their scores. The combined effect of restricted access to education, social segregation and limited career prospects can leave the next generation of low-scoring citizens with unfair chances to improve their social credit and, as a result, their quality of life. Questions arise as to whether such algorithms help bridge the social inequality gap or in fact widen it (Ebadi, 2018).
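
Stripped to its mechanics, this kind of gating is nothing more than comparing a single number against thresholds, as the sketch below illustrates. The thresholds and interest rates are invented and not based on any published rules of the actual system.

```python
# Sketch of threshold-based gating: one number decides access to services.
# Thresholds and loan terms below are invented for illustration only.
def loan_terms(social_score: int) -> str:
    if social_score < 550:
        return "loan denied"
    if social_score < 700:
        return "small loan at 12% interest"
    return "large loan at 4% interest"

for s in (500, 650, 800):
    print(s, "->", loan_terms(s))
```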

References

Ebadi, B. (2018). Artificial Intelligence Could Magnify Social Inequality. Centre for International Governance Innovation. Retrieved from https://www.cigionline.org/articles/artificial-intelligence-could-magnify-social-inequality

Kobie, N. (2019). The complicated truth about China’s social credit system. Wired. Retrieved from https://www.wired.co.uk/article/china-social-credit-system-explained

Petrasic, K., & Saul, B. (2017). Algorithms and bias: What lenders need to know. White & Case. Retrieved from https://www.whitecase.com/publications/insight/algorithms-and-bias-what-lenders-need-know


Being Human in the Age of Black Box Algorithms and Subjective Truths

17 October 2019


Photo by Esther Jiao on Unsplash

Algorithms are everywhere and play an important role in our daily lives. They decide what we see on our social media feeds, which ads are used to target us and what route we should take to get places.

The problem is that many algorithms are black boxes: complex systems that shape our world, whose internal workings are hidden or not easily understood (Oxford English Dictionary Online, 2011). With these algorithms, which often have a complex design, it is unclear how the output or conclusions were reached. With historically little oversight or accountability regarding their design, this problem has a profound effect on society, as our day-to-day lives and our personal decisions are increasingly controlled by algorithms (Carey, 2018; Illing, 2018). Most of us have no idea what algorithms are or how exactly we are being influenced by them. And how could we, if we cannot look inside, ‘under the hood’? And even if we could look inside, how should we understand what we find, when sometimes even the coders who built an algorithm do not know how the system reached its conclusion (Carey, 2018)?

Does this mean that we can no longer trust algorithms? Hannah Fry, Associate Professor in Mathematics at University College London and author of the book “Hello World: Being Human in the Age of Algorithms”, explains in an interview with Sean Illing that our attitude toward algorithms tends to swing between extremes (Illing, 2018). On the one hand, we have very high expectations of algorithms and trust them blindly. On the other hand, as soon as we see that an algorithm or its outcomes are somewhat inaccurate, we no longer trust them and disregard them altogether. Fry thinks the right attitude is somewhere in the middle: “we should not blindly trust algorithms, but we also should not dismiss them altogether” (Illing, 2018).

Subjective Truths
A larger concern with algorithms is that they often contain the biases of the people who create them, and that they reinforce biases and stereotypes we may inherently hold but might not be aware of (Li, 2019). As Bill and Melinda Gates (2019) describe, this can even be the result of non-existent or sexist data. This is especially dangerous with black-box algorithms, which do not explain their results to their programmers – let alone to the end users.

And what if information is deliberately misrepresented, or differs depending on who you are or where you are from? Take Google Maps. Google claims to be objective in marking disputed regions in various parts of the world (Boorstin, 2009). Yet depending on the country from which you access Google Maps, you will see Crimea portrayed as part of Ukraine or as part of Russia (Usborne, 2016). If you consider that at least 124 countries are involved in a territorial dispute, there is a lot of potential for subjective truths (Galka, n.d.; Metrocosm, 2015). Another example is Apple: if you are in Hong Kong or Macau, from iOS 13.1.1 onwards you will no longer find the Taiwanese flag emoji on the keyboard (Peters & Statt, 2019). Generally, as a user, you are not made aware of these intentional differences, but they do shape our perception of reality.
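
In software terms, such ‘subjective truths’ amount to content keyed on the viewer’s location. Here is a toy sketch of that idea; the label table is a simplification of mine, not Google’s actual logic.

```python
# Toy sketch of region-dependent labelling: the same territory is presented
# differently depending on where the viewer is. The mapping is invented.
LABELS = {
    ("crimea", "RU"): "part of Russia",
    ("crimea", "UA"): "part of Ukraine",
}

def label(territory: str, viewer_country: str) -> str:
    # Fall back to a neutral "disputed" label for all other viewers.
    return LABELS.get((territory, viewer_country), "disputed territory")

print(label("crimea", "RU"))  # viewer in Russia
print(label("crimea", "NL"))  # viewer elsewhere
```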

Conclusion
When it comes to algorithms, the people behind them, or really anything in life, you should not blindly trust the information that is presented to you. Moreover, as Fry argues, we should not think of algorithms themselves as either good or bad; rather, we should focus on the people behind the scenes who create these algorithms (Illing, 2018). Although algorithms may not be perfect and often are biased, they are still extremely effective and have made our lives easier.

Whereas endings are inevitable, the direction of technological progress is not. We have to ensure that technological progress remains aligned with humanity’s best interests. There might be unintended or undesired consequences, but as the French philosopher Paul Virilio said:

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution… Every technology carries its own negativity, which is invented at the same time as technical progress.” (Virilio, Petit & Lotringer, 1999).


References:
Black box. (2011). In Oxford English Dictionary Online. Retrieved 12 October 2019, from https://www-oed-com.eur.idm.oclc.org/view/Entry/282116
Boorstin, B. (2009, December 4). When sources disagree: borders and place names in Google Earth and Maps. Retrieved from https://publicpolicy.googleblog.com/2009/12/when-sources-disagree-borders-and-place.html
Carey, S. (2018). How IBM is leading the fight against black box algorithms. Retrieved 16 October 2019, from https://www.computerworld.com/article/3427845/how-ibm-is-leading-the-fight-against-black-box-algorithms.html
Gates, B. & Gates, M. (2019, February 12). Our 2019 Annual Letter. Retrieved from https://www.gatesnotes.com/2019-Annual-Letter#ALChapter4
Galka, M. (n.d.). Every Disputed Territory in the World [Interactive Map]. Retrieved 16 October 2019, from http://metrocosm.com/disputed-territories-map.html
Illing, S. (2018, October 1). How algorithms are controlling your life. Retrieved from https://www.vox.com/technology/2018/10/1/17882340/how-algorithms-control-your-life-hannah-fry
Li, M. (2019, May 13). Addressing the Biases Plaguing Algorithms. Retrieved from https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms
Metrocosm. (2015, November 20). Mapping Every Disputed Territory in the World. Retrieved from http://metrocosm.com/mapping-every-disputed-territory-in-the-world/
Peters, J., & Statt, N. (2019, October 7). Apple is hiding Taiwan’s flag emoji if you’re in Hong Kong or Macau. Retrieved from https://www.theverge.com/2019/10/7/20903613/apple-hiding-taiwan-flag-emoji-hong-kong-macau-china
Usborne, S. (2016, August 10). Disputed territories: where Google Maps draws the line. Retrieved from https://www.theguardian.com/technology/shortcuts/2016/aug/10/google-maps-disputed-territories-palestineishere
Virilio, P., Petit, P., & Lotringer, S. (1999). Politics of the very worst. New York: Semiotext(e).
