We Asked AI Professionals, Educators, and Regulators:

1. What do you think the three greatest advantages of artificial intelligence will be in the next five years?

2. What do you think the five greatest risks of artificial intelligence will be in the next five years, and what can be done to mitigate those risks?

3. What should legislators be doing now to ensure AI is not abused?

4. It’s been predicted that many people will lose their jobs due to the implementation of AI. What can people be doing now to prepare for the inevitable changes coming to the labor market?

5. How might AI impact education at the college level and what can educators do to detect and mitigate these aberrations?

Here’s how they responded:

Professor J. Mark Bishop (Professor of Cognitive Computing (Emeritus), Goldsmiths, University of London):

1. What do you think the three greatest advantages of artificial intelligence will be in the next five years? 

a. Democratisation of automation — we can now use AI to perform everyday tasks without expert software knowledge.

b. Pattern analysis — we can use AI to review large data sets to extract meaningful insights.

c. We can use AI in first-line computer support to provide bespoke, tailored responses to user questions.

2. What do you believe will be the biggest risks associated with AI in the next five years?


The biggest risk in using AI is falling into the trap of believing that AI systems really are intelligent and understand the problem domain; they do not, and hence, particularly with Generative AI, can all too easily spout very believable nonsense: the so-called ‘hallucinations’. It is critical that, in business applications, systems are thoroughly tested and evaluated by domain experts and deployed with great caution. AI systems can also be hacked and forced to output offensive messages, with concomitant brand damage.

3. What should legislators be doing now to ensure AI is not abused?

Using the OECD principles would be a good start. The OECD Recommendation of the Council on Artificial Intelligence identifies five complementary values-based principles for the responsible stewardship of trustworthy AI: (1.1) inclusive growth, sustainable development and well-being, (1.2) human-centered values and fairness, (1.3) transparency and explainability, (1.4) robustness, security and safety, and (1.5) accountability.

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes in the labor market?

It is not yet obvious to me that AI will presage the huge changes in employment that have long been predicted, for example by the widely cited study from Frey & Osborne (“The Future of Employment: How Susceptible Are Jobs to Computerisation?”); I can imagine that the nature of employment will evolve. To cope with this, as with any new technology, employees need to consider embarking on a pathway of lifelong learning.

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration?

Students need to use AI carefully, as these systems understand nothing and can all too easily spout believable nonsense (see [2] above). Furthermore, as Professor Luciano Floridi recently summarized in the context of Generative AI, “LLMs are [more] like the autocomplete function of a search engine. And in their capacity for synthesis, they approach those mediocre or lazy students who, to write a short essay, use a dozen relevant references suggested by the teacher, and by taking a little here and a little there, put together an eclectic text; coherent, but without having understood much, or added anything”. In summary, educators need to be aware of which tasks AI does well and which it does poorly, and frame their assessments accordingly.

Sek Chai, Co-founder, Latent AI

1. What do you believe the three biggest advantages of AI will be in the next 5 years?

(a) Automation – This means that a lot of work that is mundane, repetitive, and tedious would be automated, with increased accuracy and efficiency. It is not that humans can’t achieve similar performance or better, but AI would be more consistent and can run 24/7.

(b) Creative Arts – We have already seen the early impact of Generative AI creating text, images, and videos. This will continue to grow.

(c) Digital Assistance – We will learn how to best use AI tools, to help do our work faster and with less effort.

2. What do you believe the biggest risks of AI will be in the next five years?

(a) Trust. The over-exuberance and hype around AI bring real risk wherever we deploy it. For example, in areas of real consequence (e.g., safety and decision-making), our lack of understanding of the capabilities and limits of AI could lead us to lose trust in all things AI.

(b) Security. This includes the use of deepfakes and other misuse of AI. It covers areas such as cybersecurity, where AI operates faster than defenders can patch holes in our systems.

3. What should legislators be doing now to ensure AI is not abused?

Government is already setting up efforts to look at AI in many areas, such as responsible and ethical AI. I would like to see more focus on education (e.g., educating the population about AI’s limits and capabilities).

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes coming to the labor market?

Again, back to education. We can help people learn new skills (e.g., to use a new tool called AI).

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration? 

When I say education, I’m advocating education as early as middle school. We teach our kids the proper use of the internet, and we can do the same with AI. At the high school level, we need to teach students about the statistical nature of AI and how bias can be introduced. At the college level, students should become comfortable with what AI is so that they can use it responsibly. Many schools have honor systems for grading tests/exams. Schools now allow the use of calculators as a tool. I think educators need to learn how best to test whether students are learning the fundamentals, with or without the use of AI.

Shalyn Drake, Lecturer, Aviation Technology, Utah State University

1. What do you believe will be the biggest advantage of AI in the next five years?

I honestly think AI in the medical field is going to have one of the largest impacts over the next five years.

2. What do you believe will be the risks of AI in the next five years?

The biggest risks are general misuse, such as a generation of young students not understanding how AI should be used.

3. What should legislators be doing now to ensure AI is not abused?

Legislators should be listening to what industry needs and using industry as a full partner in writing appropriate legislation.

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes coming to the labor market?

I think people can prepare for a change in careers by continuing their education, continuing to network, and continuing to look for opportunities. Before my position at the university, I taught secondary education. We always said we had to prepare students for jobs that didn’t exist yet.

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration?

At the university level, it’s important to model how to properly use AI and provide students with opportunities to use it correctly.

Dr. Cari Miller, Sr. Principal/Practice Lead, AI Governance & Research, The Center for Inclusive Change

1. What do you believe the three biggest advantages of AI will be in the next 5 years?

Interestingly, I think the biggest advantages may come from some surprising areas: 1) agriculture, 2) medical discoveries, and 3) environmental safety. In agriculture, we should see greater yields, production on previously unarable land, more efficient harvesting, etc. I think we will see an explosion of breakthroughs in medical discoveries to cure all sorts of difficult and detrimental diseases. Finally, when I say “environmental safety,” I am using that term in a very broad sense. It can mean anything from personal safety devices to commercial devices. These devices, I believe, will be able to help us do lots of things, from detecting nearby pollutants and harmful toxins to preventing accidents.

2. What do you believe the biggest risks associated with AI will be in the next five years?

I think that those of us in the AI ethics field are aware of the risks and harms that certain applications of AI pose in high-risk or critical decision situations (e.g., education, housing, employment, social benefits, etc.). Those issues, I believe, will persist. However, many of those issues can be mitigated with efforts across the AI lifecycle. What I believe the biggest risks will actually be comes down to a lack of awareness of such risks and of the need to mitigate them, coupled with a detrimentally slow legislative and regulatory process. This technology, unlike any other technology in our time, is moving at an innovation pace well beyond our capacity to keep up with it, and the longer we take to traverse the learning curve and grapple with the legislative and regulatory needs, the more risk we assume… every day.

3. What should legislators be doing now to ensure AI is not abused?

Alas, there is no silver-bullet answer to this question. We are well beyond a singular approach to managing AI risks. Our legislators need to divide and conquer at this point. On one hand, we need subgroups addressing AI on a sectoral and use-case basis; on the other hand, we need leaders addressing it on a global scale to attempt to harmonize our approaches. AI is a boundaryless technology. Close coordination with other governments will be essential if we want to ensure that the most dangerous forms of AI are well watched and controlled.

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes coming to the labor market?

This question always frustrates me. When I encounter it, I am reminded of previous industrial revolutions. At each evolution, I am quite sure alarmists reported that jobs would be lost. However, that isn’t the full scope of the story. The reality is that some jobs will become outdated while new jobs emerge, and training will be required.

I am here today because four generations of railroad workers in my lineage paved the way for me to thrive. In the 1800s, I can assure you that the men riding ponies to deliver the mail never once thought, my sons and daughters will grow up to be a conductor, a mechanic, a yardmaster, and a brakeman on the railroad. Yes, today the “mail” (which now contains all sorts of products from around the world) is delivered better, faster, safer, and more economically than the pony express ever could, and millions of new jobs were created while the pony express jobs went away. Humans are resilient. It is our nature to panic a little when we don’t see the road ahead as clearly as we’d like, but history can help us understand that it will be ok. We will figure it out one step at a time, just as our forefathers did before us.

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration?

Higher ed institutions are at an inflection point. Those that survive this evolutionary episode will be the institutions that figure out how to divorce themselves from their old practices and adopt processes that enable more rapid change. Staying relevant in this time requires an enormous amount of engagement with industry. This likely means that universities will need to reduce the academic course load for educators while they immerse themselves in AI activity to get up to speed. Additionally, there will be a need to double down on new forms of instructional design that incorporate various forms of AI so that students are well prepared to meet the world. This may require additional training for educators as well. I would also emphasize that this is not something relegated only to the “computer science” department. EVERY department must begin to understand AI and how it interacts with their discipline, from English to Biology to Business to Psychology and everything in between. AI is sector-specific. Educators’ learning must happen in the context of each relevant domain so it can be taught in the same context.

Rob Sloan, CEO Gen AI x NeRF

1. What do you believe the three biggest advantages of AI will be in the next five years?

An efficiency and productivity boost of an order of magnitude. It’s like going from a horse-drawn carriage to a steam locomotive.

2. What do you believe the biggest risks of AI will be in the next five years?

Emotionally reactive regulation. Many fear-mongers and doomsayers are claiming that the end of the world, the end of labor, or the end of [insert topic] is coming due to AI. Technological change has always brought disruption, but never socio-economic collapse. Governments would be wise not to be reactionary.

3. What should legislators be doing now to ensure AI is not abused?

Careful consideration should be given to how current law may need *clarification* instead of wholesale change. For example, “deepfake” content is largely illegal already, as it misrepresents an individual’s name, image, or likeness rights (right of publicity). Greater punishments may be necessary, but the law doesn’t necessarily need any changes from the way it’s currently written.

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes coming to the labor market?

There will be disruption in a number of industries, but “AI” cannot simply replace most forms of labor. A great historical parallel is auto manufacturing. Those factories have regularly incorporated more and more autonomous robotic systems, and yet the labor force is quite strong. The advent of the computer *changed* the nature of data entry and ultimately led to more career fields being created. Disruption will happen, but there is no historical precedent, based on technical automation, for the calamity that is being pitched.

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration?

There are software tools currently used at universities that “lock down” computers for students who need to take exams, to prevent cheating. A similar approach may be necessary for writing essays. The point of higher education should be an emphasis on students being able to take in and apply knowledge. To an extent, using available tools instead of original/creative thought reveals a lack of academic ability.

Susan Tanner, Assistant Professor of Law, Louis D. Brandeis School of Law, University of Louisville

1. What do you believe the three biggest advantages of AI will be in the next 5 years?

A) Reducing procrastination when it comes to writing. Its ability to assist in organizing thoughts and streamlining the writing process is particularly relevant in legal education and practice.

B) Providing more equity for students and practitioners with disabilities. AI tools can offer necessary accommodations, making materials more accessible. Tools like spellcheck have already helped to level the playing field for those with language-processing issues; AI is a significant step forward.

C) Increasingly democratized knowledge. The impact of AI in this regard is akin to the revolution brought about by Wikipedia, MOOCs, and the free availability of case law and statutes online, as well as resources like the Internet Archive. Now, instead of just having knowledge available and searchable, it is more accessible through chat features. People can ask questions back and forth until they understand something.

2. What do you believe the biggest risks of AI will be in the next five years?

Misuse of AI and the propagation of false knowledge. This is a trend we’re already seeing deeply with social media. I’m doing work in what I’ve termed digital epistemic responsibility. Essentially, we all need to understand what reasons we have to believe certain sources. We’re already in trouble with false information. AI can make that worse with real-sounding citations: plausible, but false.

3. What should legislators be doing now to ensure AI is not abused?

To be honest, I’m not sure this is entirely the domain of legislation. There’s a risk that legislators will act in the best interests of major corporations rather than the public. A more comprehensive approach, possibly involving ethical guidelines, consumer pressure, and industry self-regulation, in addition to legislation or court rulings, might be necessary.

4. It’s predicted that many people will lose their jobs due to the implementation of AI. What can people do now to prepare for the inevitable changes coming to the labor market?

With technology came the concentration of wealth and power, and with the implementation of AI, many people may lose their jobs. It has been so in the past with industrialization. It will be true in the future for intellectual labor. If we want everyone to be able to make a living, we should be concerned about concentrations of wealth by those who have the technology and ensure that everyone can earn a living wage. I think it’s a problem that reaches beyond AI.

5. How might AI impact education at the college level? What can educators do to detect and mitigate this aberration?

Already, students look for answers rather than ways to learn. AI is one way that students who want to cheat themselves out of an education can shortcut the system. We need to reevaluate how we assess, what we want our students to be able to do, and how we will look for that demonstration. I’m incorporating more closed-book (no internet) exams into my teaching and placing more emphasis on evidence of growth and process, and less emphasis on written final products.

AI Today

Post Office Box 54272, San Jose, CA, 95154, US.
© 2024 Hologram LLC. All rights reserved.
