Microsoft joins the Open Invention Network

20 October 2018


Early last week brought the world the surprising announcement that Microsoft would be joining the Open Invention Network (OIN), a community of some 2,650 companies worldwide that have agreed to cross-license their respective patents to all other network members.
Historically, Microsoft has not been a friend to the open source community, especially where the rival operating system Linux is concerned. Microsoft’s conversion did not happen out of the blue, although it did happen fast.
Erich Andersen, Corporate Vice President of Microsoft, describes the move as “the next logical step for a company that is listening to customers and developers”. In the last couple of years, with its reorientation towards platform-based cloud services, Microsoft has been cozying up to the open source community, needing to be in good standing in order to attract developers to its platform. In fact, Microsoft is bringing its entire portfolio of roughly 60,000 active patents to the table, with 30,000 more pending, making the OIN that much more attractive for potential future members. This is a big and promising step towards the ideal of a cooperative business environment aimed at innovating together instead of obstructing each other with legal battles. Hopefully the traction the open source community has been gaining will help cultivate a similar culture in the broader business world outside of the tech realm as well.

https://azure.microsoft.com/en-us/blog/microsoft-joins-open-invention-network-to-help-protect-linux-and-open-source/
https://www.zdnet.com/article/microsoft-open-sources-its-entire-patent-portfolio/


Machine Bias vs Human Bias

6 October 2018


An often-cited concern with implementing AI decision-making is the “machine bias” learned from the datasets that were used to build the model.
Proposed solutions include drawing on multiple data sources with increased external validity and cleaning datasets. These measures will help to improve AI models for decision-making, but they do not address the core of the issue.
The dangerous bias does not stem from a single human or from inaccurate data entry; it is the categorical bias that exists in our society as a whole. As such, machine biases are no more and no less dangerous, problematic, and difficult to detect than the biases in human decisions.
The implementation of AI happens to coincide with our heightened awareness of racial, gender, and other biases, in humans and AI alike. It is right and necessary for us to be aware of their existence and to take action against them. It is, however, not a reason to discount machine decision-making. No, AI is not perfectly “neutral”, “fair”, or “objective”, but it is still MORE neutral, fair, and objective than individual humans. We cannot yet completely get rid of all the bias that goes into making models, but every bit counts. Every correction, every conscious mitigation improves the AI model and gives it an advantage over the humans. Even without any correction, the model is no more biased than the humans it learns from, and it gets rid of the short-term individual fluctuations stemming from hunger, tiredness, moods, and movies we recently watched.
Take as an example the sobering case researched by ProPublica, which found that
“COMPAS, a machine learning algorithm used to determine criminal defendants’ likelihood to recommit crimes, was biased in how it made predictions. The algorithm is used by judges in over a dozen states to make decisions on pre-trial conditions, and sometimes, in actual sentencing.”
We are rightfully horrified and shocked at this. But people are often too eager to say we should not be using AI to make such important decisions. What we forget is that the algorithm is only used “sometimes” in actual sentencing, whereas the human judgment that is “always” used formed the biased dataset in the first place and will not be any fairer. Having identified bias in the machine model, steps can be taken to adjust it. It might not be easy, but it is probably still much easier than re-educating every human judge, police officer, news reporter, and worried mother teaching her children to stay out of the gentrified neighborhoods.
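One reason identifying bias in a model is tractable at all is that we can audit its predictions directly. ProPublica's COMPAS analysis, for instance, compared false positive rates across groups. Below is a minimal sketch of that kind of audit; the data is entirely synthetic and the group labels are illustrative, not drawn from any real dataset.

```python
# Sketch of a group-level bias audit: compare false positive rates
# (the disparity ProPublica measured for COMPAS) across groups.
# All records here are synthetic and purely illustrative.

def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (0) that were predicted positive (1)."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(negatives) / len(negatives)

# (group, actual outcome, model prediction) -- 1 = flagged high risk
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]

for group in ("A", "B"):
    y_true = [a for g, a, _ in records if g == group]
    y_pred = [p for g, _, p in records if g == group]
    print(f"group {group}: FPR = {false_positive_rate(y_true, y_pred):.2f}")
```

If the rates diverge sharply between groups, that is a concrete, measurable signal that can drive threshold adjustments or retraining. No comparable measurement exists for the internal deliberations of a human judge.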

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://becominghuman.ai/how-to-prevent-bias-in-machine-learning-fbd9adf1198
