Learning by Doing: How to Organise an AI Workshop for Children
We organised an AI workshop for kids, where we explored the world of AI through image generation. We used various tools and techniques to create imaginative and artistic outputs, all while fostering creativity and critical thinking. The event was a huge success, with both children and adults actively participating and learning from each other.
This is a retrospective of the event: what we did, how we would improve it in the future, and a number of tips and ideas you can try right now on your own. From the toolkit to prompting and other practical advice, read on and learn about AI-powered image generation.
AI workshop in action: If kids are going to spend a lot of time in front of a screen anyway, why not learn something and make things while they're at it?
We wanted to help kids develop their relationship with technology
There’s no denying it. Tech has intertwined itself into our lives. The Internet was a revolution, smartphones changed how we live our lives, and now AI is breaking through to the not-so-tech-savvy masses. This development is not completely free of challenges. People spend their time indoors, glued to their screens, chasing endless dopamine hits. Adults are not immune to tech-induced zombieness, but it’s even more problematic with kids.
It’s hardly a solution to completely forbid the use of electronics. That’s a surefire way to exclude young people from society, but some kind of moderation is clearly needed. At the same time, I want my kids to use their creativity and express themselves. Would an indirect approach somehow improve the situation? If kids are going to spend a lot of time in front of a screen anyway, why not learn something and make things while they're at it? Like pictures. Pictures are cool.
Dragons are cool. Purple ones are extra cool.
Many AI tools are available for image generation, and I have been experimenting with them. We came up with the idea of this workshop to also teach media literacy and critical thinking. It was not hard to sell the idea inside the company. We did very low-key marketing and got a full house of kids in no time.
A masterclass in agility
We set up the workshop at our office. First, we wanted to make the kids and their parents feel welcome. After that, we started the actual agenda: we asked the participants to plug into the wireless network, access the image generation software, and download the models used to generate the images. A base model is a few gigabytes, and more advanced ones like Flux run into the tens of gigabytes.
In hindsight, opening laptops only after the introduction would have made more sense. I understand that the web browser might be more interesting than a middle-aged nerd talking about the history of AI. Luckily, we had many colleagues present, and we were able to provide tech support. I also bribed my older kid to be present and do crowd control while helping with the tools.
What if your drawing was a masterpiece from the Renaissance age?
We talked about how computers handle pictures and what pixels are. We showed how images are generated iteratively from static, step by step. We talked about negative prompts, image-to-image generation, and how to enhance your own drawing with the help of a computer. We also had a nice guessing game, trying to figure out whether a picture was AI-generated or not. Spotting the telltale signs of generation is getting harder and harder.
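For readers who want to try the drawing-enhancement part at home, here is a minimal image-to-image sketch using the Hugging Face diffusers library. The model id, file names, and parameter values are illustrative assumptions, not the exact setup from the workshop.

```python
# Minimal image-to-image sketch with diffusers (assumed setup, not the
# workshop configuration). A child's drawing is used as the starting point.
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # any Stable Diffusion checkpoint works
    torch_dtype=torch.float16,
).to("cuda")                              # use "mps" on Apple Silicon

drawing = load_image("kids_drawing.jpg").resize((768, 768))  # photo of the drawing

result = pipe(
    prompt="a detailed fantasy illustration of a purple dragon, vibrant colours",
    negative_prompt="blurry, low quality",  # things we do not want to see
    image=drawing,                          # the drawing guides the generation
    strength=0.6,                           # how far the model may drift from the original
    num_inference_steps=30,                 # number of denoising steps
).images[0]

result.save("enhanced_drawing.png")
```

The strength parameter is the interesting knob: lower values stay close to the original drawing, higher values let the model repaint more of it.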
We had to adopt a communication style that kept the audience engaged and accept that the focus should be on examples and tools. We got good comments and questions from the audience, and overall, the kids behaved well once we got past the initial disorder.
Oh yeah. And a pizza break is great after a lesson like this. Looking back, we probably scrapped more than half of our plans during the event. On the other hand, we found interesting avenues of discussion and opportunities to explain why something works the way it does.
A toolset that allows everyone to participate – also after the workshop
The main tool we used was Dall-E. It works fast, produces reasonably good quality, and is unpredictable enough to keep things interesting. For macOS users, we demonstrated Draw Things, since it is a complete package: it does not require installing mysterious Python libraries or dealing with hairy dependencies.
For Android and Windows users, we suggest leonardo.ai because you can try the service for free. Most of the image generation took place collaboratively on the shared screen with Dall-E. Local generation was too slow to be engaging, and some of the audience missed the finer points of model behaviour, but at least they got to see how it works. I noticed some parents tinkering with it, too.
The way we used it, the AI was like a stream that developed the input further.
How does AI-powered image generation work?
I’ve demonstrated Dall-E’s capabilities multiple times. People have a tendency to ask it to generate something super simple – like a single animal. Let’s say a lamb. A nice generated lamb seems to impress some people, but it is fairly straightforward to take things to the next level. Ask for the lamb to be green. Then ask for the green lamb to go skydiving. After that, ask the skydiving green lamb to play cricket while doing all this. At this point, people typically start presenting more creative ideas.
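We ran this demo interactively, but the same escalation can also be scripted. The sketch below is an assumption about tooling, using OpenAI's image API rather than the chat interface we used on the shared screen.

```python
# Escalating prompts against OpenAI's image API (a sketch, not what we ran live).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

prompts = [
    "a lamb",
    "a green lamb",
    "a green lamb skydiving",
    "a green lamb skydiving while playing cricket",
]

for step, prompt in enumerate(prompts, start=1):
    result = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    print(f"step {step}: {prompt} -> {result.data[0].url}")
```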
Dall-E does not use the same seed for image generation, so there is variation in the outputs as you iterate. More elaborate prompts will result in pictures with similar elements, but the process cannot be completely controlled. At some point, it will also run into limitations: the specified elements are simply not incorporated into the generated pictures, or their interpretation does not follow the prompt.
With more complex prompts, concepts start bleeding all over the picture, and some of the input is simply dropped. The limits become more visible as you extend the prompts. Nevertheless, it is a very nice thing to demo because of the speed at which it performs. It can be used as a toy, a tool, and everything in between.
The convenience of Dall-E comes with limitations. More control can be achieved by using models like Stable Diffusion and Flux. A local application is basically a UI for running and managing these models. As I said, I prefer Draw Things: it works on macOS and offers more functionality than I have time to study. AUTOMATIC1111’s Stable Diffusion web UI (A1111 for short) runs nicely on Windows. On Linux, I have not tried anything personally, but there is a lot available. SaaS solutions tend to provide a UI with a model or a set of models in the background; the amount of control and the content policies vary. That should get you started.
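To make the difference concrete, here is a minimal local text-to-image sketch using the diffusers library, with a pinned seed so the same prompt reproduces the same picture – something Dall-E does not let you do. The model id and parameter values are assumptions for illustration; use whatever checkpoint your hardware can handle.

```python
# Minimal local text-to-image sketch with diffusers and a fixed seed
# (illustrative assumptions; swap in the checkpoint of your choice).
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-2-1",
    torch_dtype=torch.float16,
).to("cuda")  # "mps" on Apple Silicon; drop torch_dtype to run on CPU

generator = torch.Generator(device="cuda").manual_seed(42)  # fixed seed = repeatable result

image = pipe(
    prompt="a purple dragon reading a book in a cosy library, watercolour",
    negative_prompt="blurry, deformed",
    num_inference_steps=30,   # how many denoising steps to take
    guidance_scale=7.5,       # how strictly to follow the prompt
    generator=generator,
).images[0]

image.save("dragon.png")
```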
AI can blend ideas within the same theme.
Prompting 101: Ideas for unleashing the power of image generation with AI
At this point, you can play around with different techniques and mediums. A “masterful graffiti on a support beam of a bridge”, a “cubist fresco on the ceiling of a chapel”, or a “Renaissance masterpiece with chalk on fresh tarmac” all work quite well. Due to content policies – and rightly so – individual artists cannot be emulated. On the other hand, you can use expressions like “a promising young artist from the late 20th century demonstrating the peak ability of impressionist painting”. This will also teach you something about art history.
Try giving it proverbs and mixing them. See how it handles pangrams. Input poems, haikus, or lyrics, or ask it to render images of beings that do not exist. Ask it to add more danger. Tell it to throw in a magical vibe. Instruct it to give the picture a dash of surrealism. Do these things multiple times. You will see things, and you will run into limitations. It is impressive.
If you want to come prepared, have a set of things you can put into prompts. Mix styles, eras, and techniques! Describe palettes, explain lighting conditions, try different perspectives, provide guidance on composition, or request different types of symmetry. Study photography and visual art! Your graphic vocabulary will develop, and you will start seeing the world around you differently. At this point, it’s no longer just playing around. It’s about developing knowledge of image generation.
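You can even keep that vocabulary in lists and let a few lines of code mix it for you. The toy prompt mixer below is just a sketch, and all the vocabulary in it is made-up example material.

```python
# A tiny prompt mixer: keep lists of subjects, styles, techniques, and lighting,
# and combine one random pick from each into a prompt.
import random

subjects   = ["a purple dragon", "a green lamb", "a lighthouse in a storm"]
styles     = ["impressionist painting", "cubist fresco", "masterful graffiti"]
techniques = ["chalk on fresh tarmac", "oil on canvas", "watercolour sketch"]
lighting   = ["golden hour light", "dramatic backlighting", "soft candlelight"]

def mix_prompt() -> str:
    """Combine one random pick from each list into a single prompt."""
    return ", ".join([
        random.choice(subjects),
        random.choice(styles),
        random.choice(techniques),
        random.choice(lighting),
    ])

for _ in range(3):
    print(mix_prompt())
```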
I confess. I have a thing for abstract art. How about some cubism?
One of the things we did during the workshop was combining our own handiwork with AI. Local image-to-image generation was out of the question because we wanted to keep the workshop interactive. What we did instead was based on the multimodal abilities of ChatGPT and Dall-E: we photographed the drawings the participants made and asked ChatGPT to describe them as if they were pieces of skilful art.
Then we asked it to generate an image based on that description. You will most likely recognise the elements, rendered with the skill of a professional artist. Again, play around a bit: ask it to be a bit more verbose and to describe the composition of more complex images as well.
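We did this interactively in ChatGPT, but the describe-then-generate loop can also be scripted. The sketch below is an assumption about how you might automate it with OpenAI's Python SDK; the model names and file names are placeholders.

```python
# Describe-then-generate sketch (assumed scripting of the workflow we did by hand):
# photograph a drawing, have a multimodal model describe it as skilful art,
# then feed the description to the image model.
import base64
from openai import OpenAI

client = OpenAI()

with open("drawing_photo.jpg", "rb") as f:
    photo_b64 = base64.b64encode(f.read()).decode()

# Step 1: describe the child's drawing as if it were a piece of skilful art.
description = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this drawing as if it were a piece of skilful art, "
                     "including its composition."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{photo_b64}"}},
        ],
    }],
).choices[0].message.content

# Step 2: generate a new image from that description.
result = client.images.generate(model="dall-e-3", prompt=description, size="1024x1024")
print(result.data[0].url)
```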
If you try even half of the things suggested here, you will know a lot about images and visual culture.
Adults were interested in the topic and had a lot to contribute
We offered the parents an opportunity to drop off their kids and enjoy a few hours of fleeting freedom. Almost all the parents stayed anyway. I got the feeling that it was not because they worried about leaving their kids with strangers but because they were interested in the topic. Having more adults present also made the overall situation much easier to handle, and it was nice to see parents taking an interest in the creative side of AI.
The adults also contributed with comments and questions. They showed knowledge and curiosity in a way that is rare in a business setting. Most of the tasks we deal with during our 9 to 5 cannot exactly be described as creative. At the workshop, we gave them a chance to do something with their kids within a structure that let them try things and be a bit silly.
Cleaning up and looking back
We had a total of three hours for the event. As we neared the end, we still had one trick up our sleeve. Image generation models excel at creating simple designs suitable for small stickers, so we asked the kids for sticker ideas and then generated a bunch. We ordered sticker sheets from a printing shop and mailed them to the kids as a thank-you for participating.
Hobbies, animals and food inspired the sticker designs.
We also told the kids to take the art supplies home with them; we had a selection of crayons, pencils, and markers to give away, and we printed many of the images we had generated together. The mess we made was surprisingly moderate, and we were able to clean up the office in a jiffy. The kids told us they had fun and asked if they could come back next weekend.
What’s next? We were quite conservative in marketing the event since seats were limited. If there’s demand, we will consider doing it again. All in all, it was super fun!