StoryPrinter: Tactile storytelling
09 April 2026
Welcome to StoryPrinter: Tactile storytelling with your kids, anytime, anywhere!
Here’s what I’ve been working on lately: an Android app that creates and prints
illustrated stories on the go.
StoryPrinter generates illustrated children’s stories using your input and OpenAI’s models.
Then, you print them on a portable thermal printer.
The result? A tiny picture book your kid can hold and fiddle with.
Check out the following demo video to see how it works:
The app
Long story short:
- Get a supported thermal printer. Currently that’s only the Phomemo T02.
- Turn it on and pair it with your phone via Bluetooth.
- Build and install the StoryPrinter app, which you can find on GitHub.
- Create an OpenAI API key, add some credits, and configure the key in the app’s settings.
Now you are ready to start… storyprinting!
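For context on what “printing” means at the byte level: portable thermal printers like the T02 generally accept 1-bit raster rows, eight dots per byte. The sketch below packs one row of pixels into that format. The 384-dot head width is my assumption for a 58mm printer, and the actual Phomemo command framing around these rows is not shown; this is an illustration, not the app’s real code.

```java
// Hedged sketch: pack a row of monochrome pixels (true = black dot) into
// a 1-bit raster row, MSB-first, as most portable thermal printers expect.
// HEAD_DOTS = 384 is an assumed head width for a 58mm printer.
class RasterPacker {
    static final int HEAD_DOTS = 384;
    static final int ROW_BYTES = HEAD_DOTS / 8; // 48 bytes per row

    static byte[] packRow(boolean[] pixels) {
        byte[] row = new byte[ROW_BYTES];
        int n = Math.min(pixels.length, HEAD_DOTS);
        for (int x = 0; x < n; x++) {
            if (pixels[x]) {
                // Leftmost pixel of each group of 8 goes into the high bit.
                row[x / 8] |= (byte) (0x80 >> (x % 8));
            }
        }
        return row;
    }
}
```

A full page image would be dithered to black-and-white first, then sent to the printer one packed row at a time.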
Begin by providing a story seed, e.g. “A brave squirrel goes to space”. The app produces the text and pictures page by page. You can steer the story in different directions by providing feedback. Does your kid have a favorite hero or art style? Add a reference image to guide the illustrations. To print a generated image, long-press it.
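The page-by-page loop with optional feedback can be sketched roughly like this. `PageGenerator` is a hypothetical interface standing in for the OpenAI call, and the overall shape is my guess at the flow, not the app’s actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of page-by-page generation: each request carries the seed,
// the story so far, and any feedback the user typed, so the model can
// steer the next page accordingly.
class StoryLoop {
    // Hypothetical stand-in for the actual model call.
    interface PageGenerator {
        String nextPage(String seed, List<String> pagesSoFar, String feedback);
    }

    private final PageGenerator generator;
    private final List<String> pages = new ArrayList<>();
    private final String seed;

    StoryLoop(String seed, PageGenerator generator) {
        this.seed = seed;
        this.generator = generator;
    }

    /** Generates the next page; pass null feedback to continue as-is. */
    String nextPage(String feedback) {
        String page = generator.nextPage(seed, List.copyOf(pages), feedback);
        pages.add(page);
        return page;
    }

    List<String> pages() { return List.copyOf(pages); }
}
```

The point of keeping the full page history in each request is that feedback like “more rockets” only makes sense relative to everything generated so far.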
Vibe coding
I am not a mobile app developer, but I really wanted to create this app.
My availability for this project was limited and I wanted to get it done in a reasonable amount of time, so I “vibe-coded” it: I let the models go wild and saw what they came up with. I used:
- GPT-5.1
- GPT-5.2
- Claude Opus 4.6
The app is not perfect, but it works and I am happy with the results. I tried to keep things very simple when working with LLMs in Agent mode, so my “stack” was pretty much:
- GitHub Copilot in Android Studio in “Agent mode” since September, using gpt-5.1 and gpt-5.2.
- Claude CLI for the last month, using Opus 4.6.
Regarding the “process” I followed:
- Auto-commit: I did not let agents auto-commit any changes as I am very picky about when, how and what to commit.
- Code review: I code-reviewed each commit myself, but I wouldn’t intervene unless something major came up.
- Tests: No tests, I did not ask for them and the LLMs didn’t proactively suggest creating them either.
- Tools: The agents were allowed to build the app, grep etc.
- Dead-ends: If the LLM failed a few times, I wrote the missing pieces myself rather than “forcing” it to figure them out.
- Cardinality: One agent at a time, nothing in parallel.
I am not 100% satisfied with the code quality or the architecture of the app.
However, I don’t think I would have done a much better job myself as I don’t usually develop mobile apps.
Additionally, I don’t feel I learned much from the process which means my next project will be “as bad” as this one.
In this case it’s not a big deal, since as I mentioned I am not an app developer.
However, it makes me a bit worried regarding the long-term implications of generative AI
on software development and craftsmanship.
Overall, vibe-coding “worked” as expected, but not in the way it’s often advertised. The app works and it’s a great prototype for personal use. But I did not gain any domain knowledge, so my next app will not be better. No matter how good the models get, I will keep asking them the wrong questions if I try this again.