In an article shared by the World Economic Forum, Julia Bossmann, President of the Foresight Institute, writes: “Optimizing logistics, detecting fraud, composing art, conducting research, providing translations: intelligent machine systems are transforming our lives for the better. As these systems become more capable, our world becomes more efficient and consequently richer.”
While this statement provides a positive outlook on the future of AI, we need to also ask ourselves: are we transferring our own human flaws to machines?
It is widely known that I have a passion for technology and artificial intelligence, which is driven by my great sense of wonder, fascination and curiosity. However, this should not be misinterpreted as a lack of concern over the potential misuse of technology. For every positive thing that technology can bring, there is also a potential negative if misused.
Most of the technology we all use on a daily basis is simply lines of code, with most machines programmed to perform specific tasks and follow those instructions without fail, regardless of the implications of their actions.
With AI – in a nutshell – humans have programmed powerful machines to mimic cognitive functions such as “learning” and “problem-solving”. In this case, one could infer that an artificially intelligent machine would learn from its mistakes and correct them.
However, a recent New York Times article examined how artificially intelligent machines can be created with a bias ingrained in their programming. The article highlighted one particular facial recognition software that could detect the gender of a person in a photograph. For white males the software was extremely accurate; however, its error rate increased with changes in gender and race. The problem arose because the developers had built their data set primarily from images of white males.
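For the technically curious, the mechanism is easy to demonstrate. The short sketch below uses entirely hypothetical numbers (the group labels, counts, and error rates are illustrative, not taken from the study the article describes) to show how a classifier evaluated on a skewed sample can report strong overall accuracy while failing badly on an underrepresented group:

```python
# Illustrative sketch with hypothetical data: overall accuracy can hide
# unequal error rates unless results are broken down by group.

def error_rate_by_group(records):
    """Return {group: error rate} from (group, actual, predicted) records."""
    totals, errors = {}, {}
    for group, actual, predicted in records:
        totals[group] = totals.get(group, 0) + 1
        if actual != predicted:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / n for g, n in totals.items()}

# Hypothetical evaluation of a gender classifier on 1,000 photos,
# 900 of which come from the group the training data favoured.
sample = (
    [("majority group", "male", "male")] * 891
    + [("majority group", "male", "female")] * 9         # 1% error rate
    + [("minority group", "female", "female")] * 65
    + [("minority group", "female", "male")] * 35        # 35% error rate
)

rates = error_rate_by_group(sample)
overall = sum(1 for _, a, p in sample if a != p) / len(sample)
# overall is 0.044: "95.6% accurate" overall, yet one group
# is misclassified more than a third of the time.
```

Because the underrepresented group contributes so few examples to the total, its failures barely dent the headline number, which is exactly why aggregate accuracy is a poor safeguard against this kind of bias.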
In this case, I foresee consequences that go beyond the ethical and moral implications of racial discrimination.
As we move toward a more exponential future, we will increasingly see AI used regularly in the tourism industry for customer service and various back-office functions. The area of greatest concern for me, and the one that presents the highest risk to our sector, is when technology that has not been ethically programmed is used at a country's point of entry for immigration and security monitoring. You can imagine the severe negative consequences these programming flaws could have if such software were used in airports for check-in, security and immigration.
The use of AI is growing at a phenomenal rate as machines become more intuitive. I am already using multiple forms of AI every day, and I am particularly fascinated by Emotional AI (EAI), with its fast learning and human-like responses. I believe that, within our lifetime, we will see conscious machines living side by side with humans.
However, I do believe we need proper safety and security measures, as well as a UN-type agency responsible for developing a code of ethics for AI and monitoring its development.
I am a technologist and believe that technology, if used ethically, can be a force for good. However, I am worried that we, as imperfect beings, may transfer our human flaws to these intelligent machines. The real question is how we find the right balance in this artificially intelligent future.
Till next time,
Chief Executive Officer
Pacific Asia Travel Association