
Some large-scale decisions we can make about AI in 2023

Bryan Alexander
5 min read · May 23, 2023

In my current work forecasting the intersection of AI and higher ed, I’ve been running into an interesting problem. Well, several, but today I’d like to share a structural one, caught between futures thinking and where AI stands right now.

We’re on the verge of several major decision points about how we use and respond to artificial intelligence, particularly given the explosion of large language models (LLMs). These decisions are large in scale and potentially radical in their effects. We could easily head in different directions on each, which makes me think of branching paths or decision gates. As a result, the possibilities have ramified.

To better think through this emerging garden of forking paths, I decided to identify the big decision points, then trace different ways through them. I also laid them out in a flow chart, which yielded some surprising results.

Dataset size. Right now LLMs train on enormous datasets. This is a problem for multiple reasons: a large carbon footprint, and access restricted to the handful of people who own or work at capital-intensive enterprises (OpenAI, Google, etc.). For some LLM applications, the bigger the training set, the better. On the other hand, some recent AI projects have gotten good results from smaller sets, and OpenAI’s leader has stated that ever-bigger datasets are now a thing of the past. So in which direction will we take AI: toward building and using bigger source collections, or smaller ones?

Written by Bryan Alexander

Futurist, speaker, writer, educator. Author of the FTTE report, UNIVERSITIES ON FIRE, and ACADEMIA NEXT. Creator of The Future Trends Forum.
