Still on #100DaysOfCode, continuing with the HTML and CSS track, and now also learning about Flexbox. While spending time trying to improve my web app's design by exploring codepen.io and another cool web design site, the first principle hit me again: it's still about meeting people's needs.
My partner and I have been talking about this new site that has been trending yet again on Twitter. Ever since I heard of this AI meme generator, I have been flooding our chat with AI-generated memes. Suffice it to say that she's not all that impressed. Ha!
I was telling her that even though the site has no fancy animations or striking color palette, it still attracted a whole lot of users, simply because it entertains people. That's especially valuable amid this pandemic, with millions of people stuck in their houses for long periods of time.
This web app has been such a success that, in fact, they've already found a way to monetize it. For example, there's an option to make a meme prediction by adding your own prefix text, like so:
But if the prefix text is personal, the following message is displayed:
You might think “ah, that must have taken so much time to make and must have been very hard to program” — well, let’s see what the creator has to say:
All the code for this monetized meme generator is in his public GitHub repo, and he explains end to end how he built it in an article. So essentially, you can clone his repo and train your own model based on his approach right now!
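To get a feel for what "adding your own prefix text" means under the hood, here's a toy sketch of prefix-conditioned text generation. This is emphatically not the site's model (which is a proper neural network); it's a tiny character-level Markov chain I wrote for illustration, and the training corpus is made up:

```python
import random

# Toy illustration (NOT the actual site's model): a character-level
# Markov chain that continues a user-supplied prefix, showing the idea
# of prefix-conditioned text generation.

def train(corpus, order=3):
    """Map each `order`-character context to the characters seen after it."""
    model = {}
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model.setdefault(context, []).append(corpus[i + order])
    return model

def generate(model, prefix, length=40, order=3, seed=0):
    """Extend `prefix` one character at a time using the learned contexts."""
    rng = random.Random(seed)
    text = prefix
    for _ in range(length):
        context = text[-order:]
        choices = model.get(context)
        if not choices:  # unseen context: stop early
            break
        text += rng.choice(choices)
    return text

# Made-up two-caption "dataset" just to exercise the functions.
corpus = "one does not simply walk into mordor. one does not simply win the internet."
model = train(corpus)
print(generate(model, "one"))
```

The real model conditions on the prefix in exactly this spirit, just with a neural network instead of a lookup table, which is why a longer or more unusual prefix steers the output so strongly.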
I found it fascinating that the AI-generated memes made sense and were actually funny. It meant that the dataset used to train the model was very large (960,000 meme captions across 24 meme formats on imgflip, to be exact), and that's the part I found funny: the fact that we have hundreds of thousands of meme captions lying around really gives you a picture of our priorities as a society. Ha!
During the first few days of my #100DaysOfCode track, I began a project on building an AI app that generates acrostic poems. The output from the model I trained was pretty much nonsense. It wasn't 100% gibberish, but it wasn't very poetic either. Here are some examples:
These lines were generated from a model trained on about 2,000 lines of Arthur Rimbaud's poems. As this model shows, 2,000 lines isn't much data when it comes to neural networks, or machine learning in general.
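For comparison, with a small corpus you can get a surprisingly decent acrostic without any model at all. This is a much simpler baseline than what I trained, just for illustration, and the line pool here is made up:

```python
import random

# Baseline sketch (no neural network): build an acrostic by picking,
# for each letter of the target word, a corpus line starting with that
# letter. The line pool below is invented for the example.

def acrostic(word, lines, seed=0):
    rng = random.Random(seed)
    # Index candidate lines by their first letter.
    by_letter = {}
    for line in lines:
        if line:
            by_letter.setdefault(line[0].lower(), []).append(line)
    poem = []
    for letter in word.lower():
        candidates = by_letter.get(letter)
        if not candidates:
            return None  # no line available for this letter
        poem.append(rng.choice(candidates))
    return "\n".join(poem)

lines = [
    "Under the wide and starry sky",
    "Softly the evening came",
    "Night folds the weary town",
]
print(acrostic("sun", lines))
```

A retrieval baseline like this guarantees the acrostic constraint by construction; the hard part my model struggled with (making the lines flow together) is exactly what needs the big dataset.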
Neural networks, and machine learning models in general, rely heavily on the volume and quality of data. If you don't have much data, deep learning is overkill and doesn't make business sense; simpler statistical methods (e.g. Bayesian inference) are often the better fit.
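To make the small-data point concrete, here's a minimal Beta-Binomial sketch: with only ten observations (numbers invented for the example), a Bayesian update still gives a sensible estimate with built-in uncertainty, whereas a neural network would have almost nothing to learn from:

```python
# Beta-Binomial conjugate update: a Beta(alpha, beta) prior combined
# with binomial data yields a Beta posterior in closed form.

def beta_binomial_posterior(successes, trials, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) prior with binomial observations."""
    return alpha + successes, beta + (trials - successes)

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Say 3 of 10 visitors clicked a button (made-up numbers).
a, b = beta_binomial_posterior(successes=3, trials=10)
print(beta_mean(a, b))  # posterior mean with a uniform Beta(1, 1) prior
```

With a uniform prior the posterior is Beta(4, 8), giving a mean of 4/12 ≈ 0.33: a usable answer from ten data points, which no 2,000-line neural poem generator of mine could claim.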
With all this in mind, I will be deploying my professional web app over the weekend, even though I only started HTML and CSS last week. Exciting!