Transformative and Generative AI: Key Differences with Examples
Large language models (LLMs) are more than just tools for churning out original content. They're transformative technologies designed to enhance, refine, and elevate existing information. When we lean on LLMs solely for generative purposes—just to create something from scratch—we're missing out on their true potential and, arguably, using them the wrong way.
- Academic Content Transformation: Suppose I want to take a dense academic article on a complex topic, like Bloom's Taxonomy in AI, and rework it into a simplified summary. In this case, I'd provide the model with the full article or key sections and ask it to transform the information into simpler language or a more digestible format.
- Format Transformation: If I write a detailed article, I can have the model transform it into a social media post, a podcast script, or even a video outline. It's not generating new information but rather reshaping the existing data to suit different formats and audiences.
- Transformative in Research & UX: In my UX research work, I often use LLMs to transform qualitative data into structured insights. For example, I might give it raw interview transcripts and ask it to distill common themes or insights. This task leverages the model's ability to analyze and reformat existing information.
- Creative Content: If I'm working on a creative project—say, writing a poem or a TTRPG campaign—I might ask the model to generate new content based on broad guidelines. This is a generative task because I'm not giving the model specific data to transform; I'm just prompting it to create from scratch.
- Brainstorming: For brainstorming purposes, like generating hypotheses or possible UX solutions, I let the model take a looser prompt (e.g., 'Suggest improvements for an onboarding flow') and freely generate ideas.
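The difference between the two styles above really comes down to how the prompt is built: transformative prompts carry source material for the model to reshape, generative prompts carry only a brief. Here's a minimal sketch of that distinction in plain Python; the helper names and prompt wording are illustrative, not from any particular LLM library:

```python
def transformative_prompt(source_text: str, instruction: str) -> str:
    """Ground the model in supplied material and ask it to reshape that material."""
    return (
        f"{instruction}\n\n"
        "Work only from the source material below; do not invent new facts.\n\n"
        f"--- SOURCE ---\n{source_text}\n--- END SOURCE ---"
    )

def generative_prompt(instruction: str) -> str:
    """Give the model only a loose brief and let it create from scratch."""
    return instruction

# Transformative: distill themes from raw interview transcripts (the UX research case).
transcripts = (
    "P1: I got lost on the settings page.\n"
    "P2: The settings menu confused me too."
)
print(transformative_prompt(transcripts, "Distill the common themes from these interview notes."))

# Generative: a loose brainstorming brief with no source data attached.
print(generative_prompt("Suggest improvements for an onboarding flow."))
```

Either string would then be sent to whatever model you use; the point is only that the transformative version arrives with its data attached.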
To illustrate both approaches in a single task, let's say I need an essay on the origins of Halloween. A generative approach would be just typing, 'Write an essay on Halloween's origins.' The model creates something from scratch, which can be serviceable but often lacks depth or accuracy. A transformative approach, however, involves collecting research material from credible sources—snippets from articles or videos on Halloween—feeding it to the model, and asking it to synthesize these points into a cohesive essay. This way, the model's response is more grounded and reliable.
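The transformative workflow just described can be sketched as a small prompt-assembly step: collected snippets get numbered and packed into one prompt that asks the model to synthesize rather than invent. This is a sketch, not a specific tool's API, and the snippet list here is just two stand-in research notes:

```python
def build_synthesis_prompt(topic: str, snippets: list[str]) -> str:
    """Combine researched snippets into one prompt asking the model to synthesize them."""
    numbered = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        f"Write a cohesive essay on {topic}, synthesizing only the sourced points below. "
        "Refer to each point by its bracketed number.\n\n"
        f"{numbered}"
    )

# Stand-in snippets gathered from credible sources during research.
snippets = [
    "Halloween traces back to the Celtic festival of Samhain.",
    "Trick-or-treating has roots in the medieval practice of souling.",
]
print(build_synthesis_prompt("the origins of Halloween", snippets))
```

Numbering the snippets makes it easy to ask the model to tie each claim in its essay back to a specific source, which is where the extra grounding comes from.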