Designing with Emerging Technologies: Generative AI
Semester: Spring 2025
Type: 3-credit seminar
Instructors: Chaki Ng and Elisabeth Sylvan
TA: TBD (if interested, email instructors)
Schedule: Wednesdays 9:30-12:30 (from 2/19 to 5/14/25)
Location: TBD
Capacity: 20 students
Eligibility rules: graduate level
Category: This is a new and experimental course for working with the latest emerging technologies. Topics will vary year to year. For Spring 2025, it’s Generative AI.
Course Description:
Cut through the hype and excitement surrounding generative AI by understanding for yourself what these tools can and cannot do. Through this course, students will learn to understand, design, and build with generative AI. The class is a mix of theory and hands-on work: students develop practical skills in designing, building, and testing with generative AI, while readings and discussions address key concepts in AI, their ethical implications, foundations of designing AI interaction, and implications for creators. No previous experience with either the theory or use of AI is required, but students will need to learn to use the tools through basic course tutorials and independent research and experimentation.
(To be determined) Possible materials budget of ~$100 for AI tool subscriptions / buying credits.
Topics
Module A: Foundations + LLM
In this module, we will cover the basics of Generative AI by looking under the hood a little bit. The goal is to understand what the “magic” is and set expectations properly. We will also work with current LLM tools, understand the differences between models, and practice writing prompts.
Generative AI 101
- Before AI: how programming typically works
- Machine Learning: using data to program
- LLM Basics (Neural Networks, Embeddings, Training, etc.)
- Prompt Engineering Basics (see the sketch below)
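As a first taste of prompting from code, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt are placeholders, and any chat-capable model you have access to would work.

```python
# Minimal prompting sketch with the OpenAI Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # swap in whichever model your key can access
    messages=[
        {"role": "user", "content": "Explain embeddings to a designer in two sentences."}
    ],
)
print(response.choices[0].message.content)
```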
Generative AI 102
- Model Comparisons
- Chatbots and Assistants
- Different Modalities
- System Prompts (see the sketch below)
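To preview system prompts, here is a minimal sketch (same SDK as above; the persona and model name are illustrative assumptions) showing how a system message shapes the assistant's behavior for a whole conversation.

```python
# Minimal system-prompt sketch; the persona and model name are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message sets persistent behavior before any user turns.
        {"role": "system", "content": "You are a blunt design critic. Answer in short bullet points."},
        {"role": "user", "content": "Critique a sign-up form that asks for ten fields."},
    ],
)
print(response.choices[0].message.content)
```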
Module B: Image Models
In this module, we will explore AI tools that could be useful for your design workflow. The goal is to identify tools that show promise in helping you become a better / more productive designer in at least one area (e.g., ideation).
Image Models
- How AI generates images
- Stable Diffusion and other models (see the sketch after this list)
- Finetuning your own models
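As a rough preview of working with open image models, here is a minimal text-to-image sketch using the Hugging Face diffusers library; the model ID and prompt are placeholders, and a CUDA GPU is assumed.

```python
# Minimal text-to-image sketch with diffusers (pip install diffusers transformers torch).
# The model ID and prompt are placeholders; a CUDA GPU is assumed for reasonable speed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("isometric illustration of a cozy design studio, soft warm lighting").images[0]
image.save("concept.png")
```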
3D and Video
- Sketch-to-3D Models
- Video Models
- 3D Photogrammetry
- Other Emerging Tools
Project #1: Design Concept Webpage with Workflow Video
All the assignments up to this point are parts of this project. You will make a simple page (web tool/platform TBD, as we want to host these all in the same place) documenting the brief, user research, moodboard, and illustration of the final concept… plus, last but not least, you will make a short YouTube video that showcases your workflow. So taking video captures / recordings throughout these modules is important.
Module C: Coding with AI
In this module, we will do some basic coding with the assistance of AI. The goal is to continue doing something uncomfortable while learning the basics of software / web design.
Coding
- Coding basics (e.g., HTML / CSS / JavaScript / TypeScript / Python / Flask / React); see the sketch after this list
- Coding Assistants
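To set expectations for the coding module, here is a minimal Flask sketch of the kind of "hello world" web app we will build with AI assistance; the route and page text are placeholders.

```python
# Minimal Flask app (pip install flask); run with `python app.py`, then visit http://127.0.0.1:5000
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # Placeholder page content; later this could serve your project pages.
    return "<h1>Hello, design studio</h1><p>Served by a tiny Flask app.</p>"

if __name__ == "__main__":
    app.run(debug=True)
```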
UI / UX
- UI basics (e.g., web); see the sketch after this list
- Generative UI
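For a sense of how quickly a web UI can be stood up from Python, here is a minimal Gradio sketch; the echo function is a stand-in for a real model call.

```python
# Minimal Gradio UI sketch (pip install gradio); the function is a stand-in for a model call.
import gradio as gr

def respond(prompt: str) -> str:
    # Stand-in: in class this would call an LLM or image model instead of echoing.
    return f"You asked: {prompt}"

demo = gr.Interface(fn=respond, inputs="text", outputs="text", title="Tiny demo UI")
demo.launch()  # serves a local web UI, by default at http://127.0.0.1:7860
```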
Module D: Designing AI Interactions
In this module, we will explore what’s ahead with AI-powered design. The goal is to imagine (or re-imagine) new or existing products (software and/or physical) that can be realized or enhanced significantly with AI.
Human-Computer Interaction
- Basic History of Human-Computer Interaction
- Opportunities / Challenges of Designing new AI-powered Interactions
Designing new Interactions
- Rapid prototyping and testing
- Leveraging the AI tools learned to date
Project #2: AI Interaction Concept
You will think big and try to come up with one novel interaction paradigm and communicate it. Given that the paradigm may be difficult or even impossible to implement with today's technology, the form of the deliverable is negotiable at this point; options include:
- Clickable prototypes (e.g., in Figma)
- Working prototypes (e.g., web app; physical objects)
- Hybrid (e.g., concept video with physical objects but edited digital inputs / outputs)
Tools
This is a working and growing list of tools you might learn and use. Given its size, we will cover just a subset of these in class. Everyone is also encouraged to share new tools they discover with the class.
LLM Apps: ChatGPT, Claude, Gemini, Llama
LLM Apps (Dev): Google AI Studio, OpenAI Platform
Image Models: Midjourney, SD / SDXL, Flux, Visual Electric, Ideogram
Image Apps: Google Whisk
Run AI Online: Replicate, fal.ai, Runpod, RunDiffusion
Run AI Locally: Ollama, LM Studio, Stability Matrix (see the sketch after this list)
Image UIs: Automatic1111, Forge WebUI, ComfyUI
Model Directories: Civitai (has NSFW), HuggingFace
Video Models: Runway, Luma Dream Machine
Video Apps: DomoAI
3D Apps: Vizcom, Tripo, Spline
Coding: P5.js, Python, Figma, GitHub, Cursor, Replit, v0, Claude
Python Frontend: Streamlit, Gradio
Gen UI: Uizard, Galileo, UXPilot
Photogrammetry: Gaussian Splats, Scaniverse, Polycam
Computer Vision: OpenCV, Teachable Machine
Speech: ElevenLabs
Animations: TouchDesigner
Hardware: Raspberry Pi
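As one example from the "Run AI Locally" category above, here is a minimal sketch that calls a model served by Ollama over its local REST API; it assumes Ollama is running and the named model has already been pulled (e.g., with `ollama pull llama3.2`).

```python
# Minimal local-model sketch against Ollama's REST API (assumes the Ollama app is running).
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",   # placeholder: any model you have pulled locally
        "prompt": "Give me three moodboard themes for a travel app.",
        "stream": False,       # ask for a single JSON response instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```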