For the past two months I’ve been scrambling to work on generative AI. That’s the phrase I prefer as an umbrella for ChatGPT, art generators like DALL-E, and any AI-driven software that helps us make content.

Besides hosting Forum sessions (some of our best attended and viewed: 1, 2, 3), I’ve been trying to write up thoughts on two levels: what this means for higher ed now and in the short term, and some bigger-picture implications. Here I want to share some of the forecasts I’ve been building up in the second category.

For this post to work I am assuming a few things.

  1. Generative AI advances briskly over the next few years, improving in quality and growing in instances. It gets better at making content: text, images, audio, video, games, 3D printing, XR. There is no wild breakthrough which totally redoes the tech, nor does the technology collapse. For the latter, I’ve heard arguments that the predictive model is ultimately too flawed to be reliable. After all, some innovations do stall out or hit dead ends. (I like to cite 8-track tapes here.) But for now, let’s imagine that ChatGPT et al make significant progress.
  2. Enough people perceive generative AI (“GAI” from here on) to be of at least sufficient quality to use it. That doesn’t mean everybody thinks the stuff is good enough, just that a large enough number do to make a difference. This also differs from my own assessment of GAI’s quality, or anything like an objective take. The point concerns…

Written by Bryan Alexander

Futurist, speaker, writer, educator. Author of the FTTE report, UNIVERSITIES ON FIRE, and ACADEMIA NEXT. Creator of The Future Trends Forum.