I recently attended Future Led: Superintelligent AI – Social Saviour or World Threat? The event was the second I have attended in the Future Led series run at Liquid Interactive, a digital experience agency that we partner with. Liquid Interactive's Executive Creative Director, Andrew Duval, moderated, and despite the depth and breadth of topics to cover, he kept the conversation informative and engaging within an admittedly tight hour.
The panel brought a wealth of both theoretical and applied knowledge, with experience ranging from the practical development and application of AI through to its ethical implications: Nick Therkelsen-Terry, CEO of Max Kelsen, an engineering agency with a focus on AI and ML; Sue Keay, CEO of the Queensland AI Hub and chair of the Board of Robotics Australia Group; Dr Evan Shellshear, Head of Analytics at Biarri, which develops mathematical and predictive modelling solutions; and Dr Justine Lacey, Director of the Responsible Innovation Future Science Platform at CSIRO.
I have to say, I initially felt a little intimidated by the panel's depth of experience, but the speakers quickly absorbed me in their discussion of how the development of superintelligent AI could impact all of our lives: the potential for both positive and negative impacts, and what we, as a society and as developers of technology, should be considering as we move forward.
One aspect of the discussion I found particularly interesting was when the panellists raised the question "What is creativity?". If we program an AI that can build upon existing data, and even improve upon it, at what point do we deem its outputs "creative"? Specific definitions of creativity exist and were discussed, and while the definitions themselves were not contentious, how we might apply them to work produced by an artificially intelligent system is far from resolved. Dr Evan Shellshear asked how we might differentiate between the artists of the past, whose work was not appreciated by the critics of their time, and an artificial intelligence that is creative in a way we are not yet ready to understand.
The panellists' conversation around creativity stemmed from questioning how we define intelligence. Existing AIs operate exceptionally well in the specialised areas in which they have been trained. The harder question is how to take machines to a level of intelligence we have previously considered only humanity capable of (and beyond): is it possible to develop machines that not only build on previous data, but can make intuitive connections and seek out new paths?

A phrase often used when talking about software and technology is 'garbage in, garbage out'. It is based on the idea that a computer can only operate on the code, or instructions, we give it. If we develop intelligent machines that learn from the information we provide, those machines can only make decisions that reflect what is embedded in that data. There have been some abysmal examples of artificial intelligence solutions that not only perpetuate the societal and cultural biases of the humans who designed them, but in some cases amplify their impact. The panellists discussed that AI is not only a tool we must develop mindful of our own inherent biases; it also has the potential to help us identify and understand those biases. That could be part of a conscious step towards a society that identifies, acknowledges and addresses situations where subconscious bias contributes to unfair outcomes for minorities.
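To make the 'garbage in, garbage out' point concrete, here is a minimal, entirely synthetic sketch of a classifier trained on historically biased labels. Nothing here comes from the panel or any real system; the features, the bias penalty and the threshold are all made-up assumptions for illustration.

```python
# Toy illustration of "garbage in, garbage out": a classifier trained on
# synthetic, historically biased labels reproduces that bias at prediction
# time. All values here are illustrative assumptions, not a real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

skill = rng.normal(0, 1, n)    # a genuinely relevant feature
group = rng.integers(0, 2, n)  # a protected attribute that *should* be irrelevant

# Historical decisions penalised group 1, so the labels we train on
# already carry the bias before any model is fitted.
labels = (skill - 1.0 * group + rng.normal(0, 0.5, n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), labels)

# Identical skill, different group: the model dutifully scores group 1 lower.
print(model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1])
```

The model has done nothing "wrong" in a technical sense; it has faithfully learned the message of its data, which is exactly the panel's point. The flip side is also visible here: comparing predictions across groups at fixed skill is precisely the kind of analysis that lets us detect and measure the bias.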
Given that we imagine using a future superintelligent AI to solve problems that as yet remain unsolved, we can't be certain that the solutions to those extraordinarily complex problems will emerge in quite the form we imagine. Dr Justine Lacey raised the question of whether artificial intelligence might take such a different approach to problem-solving that its solutions are not the ones we expect. While the panellists were primarily optimistic about how we might use an artificial superintelligent machine to improve our society, we need to look at both the practicalities and the technology required, while also considering the theoretical implications of what it might mean for us as humans. Nick Therkelsen-Terry spoke particularly strongly about the importance of Australian investment and research in this area. Given the potential and opportunity to be had, both for us as an industry and more broadly, I think this is something most of us can agree on.
While there is still a way to go before the advancements of AI and ML are truly 'superintelligent', there are so many problems that artificial intelligence and machine learning are already helping us solve. Just a couple of months ago, our Data Science Unit Lead, Chris Roach, shared a blog about the exciting results we had with the prototype fish-identification product we developed in partnership with the Global Wetlands team at Griffith University. The project was part of the "Counting Fish" challenge put forward by the Australian Institute of Marine Science (AIMS) to address the fact that fish identification is currently a highly manual task, demanding significant staff hours and resources to collect marine data. The prototype showed that we can substantially reduce the human effort required to collect this important data, allowing us to contribute to research and decision-making in the marine sciences far more effectively and efficiently.
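The prototype's internals aren't covered in this post (see Chris's earlier blog for the results), but in the spirit of the "Counting Fish" challenge, here is a rough sketch of what automated fish counting over survey video can look like. The model file, its torchvision-style output format, the video filename and the confidence threshold are all hypothetical assumptions, not our actual implementation.

```python
# Rough sketch of per-frame fish counting from survey video. The detector
# ("fish_detector.pt"), its output format and the 0.5 threshold are
# hypothetical assumptions for illustration only.
import cv2
import torch

model = torch.jit.load("fish_detector.pt").eval()  # hypothetical fish detector

video = cv2.VideoCapture("transect_survey.mp4")    # hypothetical survey footage
frame_counts = []

while True:
    ok, frame = video.read()
    if not ok:
        break
    # OpenCV yields HWC uint8 BGR; convert to the CHW float RGB tensor
    # in [0, 1] that torchvision-style detectors expect.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        # Assumes the detector returns one dict per image with "boxes",
        # "scores" and "labels", as torchvision detection models do.
        detections = model([tensor])[0]
    frame_counts.append(int((detections["scores"] > 0.5).sum()))

video.release()
print(f"frames: {len(frame_counts)}, peak fish in one frame: {max(frame_counts, default=0)}")
```

Even a simple per-frame count like this turns hours of manual video review into minutes of compute, which is the kind of efficiency gain the prototype demonstrated.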
If you have a problem that needs solving and would like to discuss how Gaia Resources could help you solve it, please feel free to get in touch with me or our Data Science Unit Lead, Chris Roach. Alternatively, hit us up on Twitter, LinkedIn or Facebook.
Sophie