Is facial recognition for pigs going to improve industrial agriculture?

18 September 2020

One of the most unsustainable practices of modern civilisation is industrial agriculture. The industrial farming industry is responsible for the abuse of land, animals, and natural resources. Indeed, the animals that are slaughtered for consumption have often led a miserable life in factory farming. They are reduced to objects that represent a certain economic value, and their natural needs are often not met. This causes them to exhibit all kinds of stress and unnatural behaviour.

Pig facial recognition: solving the problem of miserable animals

To combat the problem of miserable animals, Chinese tech companies see opportunities in pig facial recognition. This type of facial recognition works the same way as human facial recognition; the only difference is that it records details of the eyes, snouts, and bristles of pigs instead of humans. Besides identifying pigs, facial recognition cameras can track the animals’ movements and monitor whether they are eating or becoming lethargic. All of this information is stored in an individual file for every pig. These data files also include each pig’s age, weight, breed, exercise frequency, and other indicators of its health. The AI platform can then use all of this data to continuously keep an eye on the pigs and trigger an alarm if it is concerned about an animal’s health. In this way, diseases can be diagnosed early.
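
To make this concrete, here is a minimal sketch of what such a per-pig data file and alert rule could look like. The field names and thresholds are illustrative assumptions, not the schema of any vendor’s actual system.

```python
from dataclasses import dataclass

# Hypothetical per-pig data file; field names and thresholds are invented
# for illustration, not taken from any real platform.
@dataclass
class PigProfile:
    pig_id: str
    age_months: int
    weight_kg: float
    breed: str
    meals_eaten_today: int = 0
    minutes_active_today: float = 0.0

def health_alert(pig: PigProfile) -> bool:
    """Flag a pig whose camera-observed behaviour deviates from a baseline."""
    skipped_meals = pig.meals_eaten_today < 2    # assumed feeding baseline
    lethargic = pig.minutes_active_today < 30    # assumed activity baseline
    return skipped_meals or lethargic

pig = PigProfile("CN-0042", age_months=6, weight_kg=85.0, breed="Duroc",
                 meals_eaten_today=1, minutes_active_today=12.0)
if health_alert(pig):
    print(f"Alert: check pig {pig.pig_id} for early signs of illness")
```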


Is it really possible to assess a pig’s wellbeing based on visual cues? Yes: pigs are known to be extremely expressive and can communicate with each other through facial expressions. As a matter of fact, research by both SRUC and UWE Bristol has already shown that the animals can signal their intentions to other pigs using different facial expressions. There is also evidence that they show different expressions when they are in pain or under stress.

Only in the future?

Pig facial recognition is not just a thing of the future. One of the first Chinese companies to roll out such a system is the start-up Yingzi Technology. Their system works by scanning each pig’s individual face with a smartphone (see the picture below); the animals can still be identified while moving in a herd. The recorded data is then analysed by a mobile application that uses deep learning algorithms. The company claims that the system can match and update a pig’s data profile in only a few seconds. Moreover, Yingzi Technology is not the only one trying to give pigs a better life: Alibaba is currently deploying “Agriculture Brain”, an AI platform that uses AI-supported facial and speech recognition (plus other IoT technologies) to help farmers monitor their pigs.
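
The matching step in systems like this typically works by turning each face image into an embedding vector with a deep network and comparing new scans against stored embeddings, for example by cosine similarity. The sketch below illustrates that idea only; the network is replaced by a trivial stand-in, and every name and threshold is an assumption rather than a detail of Yingzi’s product.

```python
import numpy as np

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a trained CNN that maps a face crop to an embedding.
    Flatten-and-normalise keeps the example runnable without a model."""
    v = np.resize(image.astype(float).ravel(), 128)
    return v / (np.linalg.norm(v) + 1e-9)

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Return the best-matching pig ID by cosine similarity, or None."""
    best_id, best_sim = None, threshold
    for pig_id, emb in gallery.items():
        sim = float(query @ emb)  # both embeddings are unit-normalised
        if sim > best_sim:
            best_id, best_sim = pig_id, sim
    return best_id

rng = np.random.default_rng(0)
gallery = {f"pig-{i}": embed_face(rng.random((16, 16))) for i in range(3)}

new_photo = rng.random((16, 16))
gallery["pig-new"] = embed_face(new_photo)       # registration scan
print(identify(embed_face(new_photo), gallery))  # re-identification -> pig-new
```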

[Image: a pig’s face being scanned with Yingzi Technology’s smartphone app]

Not all problems are solved by pig facial recognition 

Despite these technological innovations, it seems unlikely that they will solve all the problems of industrial agriculture. Even if the welfare of farm animals improves with this new technology, our current global food system is still responsible for one-third of global greenhouse gas emissions and depends completely on fossil fuels for transportation, synthetic fertilisers, and pesticides. Facial recognition can give some animals a better life, but we need to come up with something more far-reaching to fix industrial farming.

The question is: can technology significantly improve the unsustainable practices of industrial agriculture, or does the industry need a completely new business model altogether?

Sources:

https://spie.org/news/facial-recognition-spots-happy-pigs?SSO=

https://www.counterpointresearch.com/year-pig-heres-facial-recognition-pigs/

https://www.onegreenplanet.org/animalsandnature/factory-farming-is-killing-the-environment/

https://www.nature.com/articles/s41598-018-35905-3

https://www.pigprogress.net/Sows/Articles/2019/3/Facial-recognition-for-detecting-pig-emotions-406445E/



How Algorithms Discriminate Against Women: The Hidden Gender Bias

9 September 2020

In past decades, worries about AI have moved from whether it will take over the world to whether it will take our jobs. Today we have a new, and justifiably serious, concern: AIs might be perpetuating or accentuating societal biases and making racist, sexist, or otherwise prejudiced decisions.

Machine learning technology is inherently biased

Many believe that software and algorithms that rely on data are objective. But machine learning technology is inherently biased, because bias is how it works: a model learns patterns in its training data and uses them to map inputs to outputs. If the data reflects societal prejudice, the model will reproduce it. Of course, the data that is fed in can be modified directly, through techniques like data augmentation, to make it less biased. But there is a problem: humans consciously know not to apply certain kinds of bias, yet subconsciously they end up applying biases they cannot control.
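
A toy example makes this concrete. The sketch below trains a standard classifier on synthetic ‘historical’ hiring decisions that favoured one group. Group membership is never given as a feature, yet the model reproduces the bias through a correlated proxy feature. All data and numbers are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
skill = rng.normal(0, 1, n)                    # skill is equal across groups
proxy = rng.normal(2.0 * (group == 0), 1, n)   # neutral-looking proxy for group A
hired = (skill + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 0.5  # biased labels

# 'group' is NOT a feature -- the model only ever sees skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted hire rate = {pred[group == g].mean():.2f}")
# Despite identical skill distributions, group B is predicted far less often.
```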


Tech-hiring platform Gild

This being the case, it is not surprising to find hidden biases all around us in the world today. For example, consider the secretive algorithms that have become increasingly involved in hiring processes. American scientist Cathy O’Neil explains how the online tech-hiring platform Gild enables employers to go well beyond a job applicant’s CV by combing through the traces they leave behind online. This data is used to rank candidates by ‘social capital’, which is measured by how much time they spend sharing and developing code on development platforms like GitHub or Stack Overflow.

This all sounds very promising, but the data Gild sifts through also reveals other patterns. For instance, according to Gild’s data, frequenting a particular Japanese manga site is a ‘solid predictor for strong coding’. Programmers who visit this site therefore receive higher scores. As O’Neil points out, awarding marks for this is a serious problem for diversity. She suggests ‘if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of the women in the industry will probably avoid it’.
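
To see how a single proxy feature can skew a ranking, consider a deliberately simplified, hypothetical ‘social capital’ score. The features and weights below are invented, since Gild’s real model is proprietary.

```python
# Hypothetical scoring function; Gild's actual model is secret.
def social_capital_score(candidate: dict) -> float:
    return (1.0 * candidate["github_hours_per_week"]
            + 0.5 * candidate["stackoverflow_posts"]
            + 2.0 * candidate["manga_site_visits"])  # the problematic proxy

alice = {"github_hours_per_week": 10, "stackoverflow_posts": 8, "manga_site_visits": 0}
bob = {"github_hours_per_week": 10, "stackoverflow_posts": 8, "manga_site_visits": 5}

# Identical coding activity, yet the proxy ranks Bob well above Alice.
print(social_capital_score(alice), social_capital_score(bob))  # 14.0 24.0
```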


‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’


In the book “Invisible Women”, Caroline Criado Perez notes that ‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’. However, if managers are not aware of how those biases operate, do not collect data on them, and take little time to produce evidence-based processes, an organisation will continue to blindly perpetuate old injustices. Indeed, by not considering how women’s lives differ from men’s, Gild’s coders accidentally created an algorithm with a hidden data bias against women.

But that is not even the worst part. The worst part is that we have no idea how bad the problem really is. Most algorithms of this kind are kept secret and protected as proprietary code, which means we do not know how decisions are being made or what biases they are hiding. As Perez points out, ‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’. This, therefore, is a double data gap: first, in the knowledge of the coders designing the algorithm, and second, in the knowledge of society at large about just how discriminatory these AIs can be (Perez, 2020).

‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’


We need more diversity in tech to reduce the hidden gender bias

Many argue that one way to combat hidden gender bias is to increase diversity of thought by bringing more women into tech. According to the World Economic Forum, only 22% of AI professionals globally are female, compared to 78% who are male. Additionally, at Facebook and Google, less than 2% of technical roles are filled by black employees. To remove hidden bias from algorithms, tech companies should step up their recruiting practices and increase diversity in technical roles.

Do you have any other suggestions for how managers can reduce hidden bias? Or have you come across other examples of hidden bias? Feel free to leave a comment.

References:

The Guardian (2016). How algorithms rule our working lives. Retrieved from https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives

Perez, C. C. (2020). Invisible Women: Data Bias in a World Designed for Men. New York: Abrams Press.

Forbes (2020). AI Bias Could Put Women’s Lives At Risk – A Challenge For Regulators. Retrieved from https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/

World Economic Forum (2020). Assessing Gender Gaps in Artificial Intelligence. Retrieved from  http://reports.weforum.org/global-gender-gap-report-2018/assessing-gender-gaps-in-artificial-intelligence/

Dogtown Media (2019). Can AI’s Racial & Gender Bias Problem Be Solved? Retrieved from https://www.dogtownmedia.com/can-ais-racial-gender-bias-problem-be-solved/
