Hello Blog

Hello, World

Hello world. So, I have started a blog, for a few reasons. The Google TRC website says participants are expected to share their work with the world, be it via projects, papers, or blog posts, so I figured that, as a self-proclaimed researcher, not having a blog meant I was missing something important. The other reason is that one of my friends also made a blog, and it seemed like a fun project.

I also did it because I wanted my stuff to be usable by other people. The number of academic papers that don't link any code is infuriating. Concepts are great, but without the implementation details a paper can be close to useless, especially when the results depend on those details. So expect most of my work to be open source and easy to download from GitHub.

The Vision

The last reason was my specific vision for the layout of a website: something academic but unique, with interesting and semi-related visual effects in the background.

A neural network on the front page seemed fitting, since a lot of my work involves computers and ML. My first idea was to “train” a ghostly copy of the user: feed a small neural network your movements and have it try to predict them, so that you would effectively train your own double. But I figured it would be too difficult, and not exactly mobile-friendly, so I settled on a typical neural network visualization (though I may revisit the double idea later if it somehow relates to a project).

Quick technical note: the neural network on the front page reads four inputs derived from your mouse/touch position and page scroll, which the code transforms with sine/cosine functions into rapidly changing, oscillatory signals. Learning is intentionally brisk (the code uses LEARNING_RATE = 0.15 with a sign-based, clamped update), and activations use tanh, so values sit between -1 and 1. When you stop interacting, the network gently decays: activations, biases, and weights are multiplied by ~0.99 each frame, and tiny weight "sparks" are preserved so the network never fully dies. Admittedly, the network is built only to look interesting and is essentially incapable of learning anything, but it still does look cool.
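To make that concrete, here is a minimal sketch of the per-frame math. Only LEARNING_RATE = 0.15, the tanh activations, the sign-based clamped update, the ~0.99 decay, and the weight "sparks" come from the description above; everything else (the clamp bound, spark size, and the exact input formulas) is a hypothetical stand-in, not the site's actual code.

```javascript
// Constants from the post (LEARNING_RATE, DECAY) plus hypothetical ones.
const LEARNING_RATE = 0.15;
const DECAY = 0.99;  // per-frame decay factor (~0.99 per the post)
const CLAMP = 2.0;   // hypothetical weight bound
const SPARK = 0.01;  // tiny magnitude preserved so the net never fully dies

// Sign-based, clamped weight update toward an error signal.
function updateWeight(w, input, error) {
  const step = LEARNING_RATE * Math.sign(error * input);
  return Math.max(-CLAMP, Math.min(CLAMP, w + step));
}

// Per-frame decay that preserves a tiny "spark" of weight.
function decayWeight(w) {
  const d = w * DECAY;
  return Math.abs(d) < SPARK ? Math.sign(w || 1) * SPARK : d;
}

// Oscillatory inputs derived from pointer position, scroll, and time.
// The exact sine/cosine combinations here are illustrative guesses.
function makeInputs(x, y, scroll, t) {
  return [
    Math.sin(x * 0.01 + t),
    Math.cos(y * 0.01 + t),
    Math.sin(scroll * 0.005 + t * 0.5),
    Math.cos((x + y) * 0.005 - t),
  ];
}

// One neuron's activation, squashed into [-1, 1] with tanh.
function activate(weights, inputs, bias) {
  const sum = weights.reduce((acc, w, i) => acc + w * inputs[i], bias);
  return Math.tanh(sum);
}
```

The spark floor is the interesting design choice: without it, repeated multiplication by 0.99 drives every weight to zero and the animation flatlines when nobody is interacting.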

As for the background on this page: behind the text is a toy model of packet-switched routing. Nodes drift slowly across the screen, connected by edges. The small red pulses traveling along them are packets — they spawn, route through the graph, and die. Click or tap near an edge to break it, and the packets will reroute around the damage. Hover over a node to inject local traffic and watch it congest. It's the simplest of the three backgrounds, but I think it captures the idea well.
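The rerouting behavior boils down to a path search over an adjacency map, with edge-breaking just removing entries before the next search. This is a hypothetical sketch of that idea (plain BFS on an unweighted graph), not the page's actual implementation:

```javascript
// Find a shortest path from start to goal with BFS over an adjacency
// Map of node -> array of neighbors. Returns null if unreachable,
// which is when a packet would "die".
function shortestPath(adj, start, goal) {
  const prev = new Map([[start, null]]);
  const queue = [start];
  while (queue.length) {
    const node = queue.shift();
    if (node === goal) {
      // Walk the prev-links back from the goal to rebuild the path.
      const path = [];
      for (let n = goal; n !== null; n = prev.get(n)) path.unshift(n);
      return path;
    }
    for (const next of adj.get(node) || []) {
      if (!prev.has(next)) {
        prev.set(next, node);
        queue.push(next);
      }
    }
  }
  return null;
}

// Breaking an edge (the click/tap interaction) removes it from both
// endpoints; subsequent searches route around the damage automatically.
function breakEdge(adj, a, b) {
  adj.set(a, (adj.get(a) || []).filter((n) => n !== b));
  adj.set(b, (adj.get(b) || []).filter((n) => n !== a));
}
```

For example, on a square graph A-B-D / A-C-D, breaking the A-B edge forces packets from A to D onto the A-C-D route, which is exactly the visual effect described above.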

Efficiency as a Point of Pride

It goes without saying, but I try to make the entire webpage and all of its visual effects as efficient as possible. I try to write optimized code all the time, and it is a personal point of pride to say my website is thousands of times faster than a typical one.

And that's about it. Hopefully you will see more blog posts soon, as I publish my projects to arXiv. I have a couple of projects nearing completion, so it shouldn't be too long.