12 Years in Open-Source

· 2 min read
Viktor Trapenok
Principal Engineer @ Nanoheal

(To summarize: approximately $8,000)

At the beginning of 2012, I started working on my first open-source project. It was a comet server written in C++.

The idea was simple: create a comet server with an API and a SaaS subscription service. The closest analogy is pusher.com.

I completed the project and even added more features than I originally planned. In synthetic tests, my server handled up to 64,000 simultaneous connections. I invested a significant amount of my time — around 2,000 hours of work, by my estimate.

The project attracted more than 20,000 registrations, and about 1,000 websites actively used my API. However, most of them stayed on the free tier. At its peak, the total number of concurrent WebSocket connections reached 8,000.

My largest client was the Russian federal TV channel OTR, and I even signed a contract with them for technical support.

But financially, the project was almost a complete failure. I wasn’t able to effectively market my solution. Over nearly 12 years, the total revenue from subscriptions amounted to about $8,000 from various users — barely enough to cover hosting costs.

The Reasons:

  • I did not have experience in marketing (I’ve since learned a lot, but it took time).
  • The path from a new signup to a paid subscription was too long.

But What Else Did I Gain?

  • I used this project as a topic for my diploma thesis at the institute.
  • I received dozens of freelance jobs for developing chats and video chats.
  • Over the years, I talked about this project in every job interview, giving it as much weight as the rest of my work experience.
  • I’m pretty sure this project helped me get a job at cube.dev, where I learned a lot.

The Indirect Benefits

It’s hard to quantify the indirect benefits. But without this project in my portfolio, how much harder would it have been to find work? How much less would I have been paid in my positions?

In Conclusion:

For anyone starting their own project, here’s my advice:

Start! It’s an amazing experience. But if, like me, you hope to earn from it, think about marketing and monetization from the start.

Reducing CI/CD infrastructure costs

· 2 min read
Viktor Trapenok
Principal Engineer @ Nanoheal

I want to share how we at Nanoheal reduced our infrastructure costs by using GitLab runners in Kubernetes with Karpenter on spot instances.

At Nanoheal, we have a high level of developer activity, and CI/CD pipelines play a critical role in our DevOps practices for continuous delivery.

Previously, we ran three powerful EC2 virtual machines in our AWS account, dedicated to hosting GitLab runners. However, since our team worked within the same time zone, these machines sat idle almost 16 hours per day, as most work was done during business hours. During the remaining 8 hours, the servers were often overloaded due to peak demand.

The Solution

I migrated GitLab runners to Kubernetes. We already had a Kubernetes cluster running in AWS, so I configured Karpenter to handle autoscaling within the cluster.

Taints and node affinity rules determine which pods can run on which nodes, and Karpenter provisions nodes that satisfy the scheduling constraints of pending pods.

The configuration is set up so that Karpenter creates a node when a GitLab runner starts a pipeline. The pipeline then runs on a spot instance dedicated to CI/CD tasks. Our pipelines consume a significant amount of resources and can negatively impact other services if they share the same node in the cluster.

Once the pipeline completes, the node remains empty, thanks to the properly configured taints, which prevent other pods from being scheduled on it. After 30 minutes of inactivity, Karpenter automatically removes the node.
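
For illustration, here is a minimal sketch of what such a configuration could look like, assuming the Karpenter v1 NodePool API. The resource name, the dedicated=ci taint, and the default EC2NodeClass are placeholders rather than our exact values; the real files are in my GitHub repository mentioned at the end of this post.

```yaml
# Illustrative NodePool for CI/CD nodes (names and values are examples only).
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: gitlab-runners
spec:
  template:
    spec:
      # Keep regular workloads off these nodes; runner pods must tolerate this taint.
      taints:
        - key: dedicated
          value: ci
          effect: NoSchedule
      # Provision only spot capacity for pipeline jobs.
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  disruption:
    # Remove a node once it has been empty for 30 minutes.
    consolidationPolicy: WhenEmpty
    consolidateAfter: 30m
```

On the GitLab side, the runner's pod spec needs a matching toleration and node selector so that CI jobs are the only pods scheduled onto these nodes.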

The Result

By migrating CI/CD pipelines to our Kubernetes cluster, we eliminated the need for three always-on virtual machines. Instead, we now run one or two spot nodes for roughly 60 hours per week, drastically cutting our infrastructure costs.

You can find the configuration files on my GitHub.

Need Any Help?

Please ping me on LinkedIn if you need any help. I am open to work.

Integration Testing for Telegram Bots

· 4 min read
Viktor Trapenok
Principal Engineer @ Nanoheal

This is the text of my talk at the FAR EAST DevOps DAYS conference about my experience with integration testing for Telegram bots. Specifically, I’ll discuss the challenges and solutions I encountered when testing a Telegram bot that I inherited as part of a project.

The Project Context

The bot was originally written in PHP using the MadelineProto library. For those unfamiliar with this library, it's quite versatile. It not only allows you to create traditional Telegram bots, but it also supports automating actions on behalf of user accounts, which opens up possibilities that regular bots don't offer, such as making audio calls.

When I joined the project, the bot already had a significant amount of code. However, there were no tests whatsoever. As you can imagine, this was a major productivity bottleneck since I had to manually test the bot’s functionality at the end of each workday. Writing tests became my priority to speed up development.

The Challenge of Finding Existing Libraries

My initial approach was to search for existing libraries that could facilitate the testing process. However, after some research, I found that most of the libraries available were written in Node.js, which wasn't ideal for my PHP-based bot. Integrating them would have taken considerable time and effort, and they wouldn’t have allowed me to test the bot directly through Telegram’s API.

My Solution: Testing via Telegram Web

Given the limitations, I decided on a simpler approach: testing the bot through Telegram Web. By interacting with the bot via the web interface, I could simulate user actions like sending messages and clicking buttons. To achieve this, I wrote a small set of functions/scripts that automated these actions, which I’ll share with you later in this post.

This approach was quite effective. The script would send messages to the bot, click buttons, and perform other necessary tasks, saving me hours of manual testing.

How It Works

Here’s a brief overview of how the testing script operates:

  1. Open Telegram Web.
  2. Inject the testing script into the browser’s developer console.
  3. The script then automates interactions with the bot, simulating a real user.
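
My actual scripts are tied to the project’s specific scenarios, but to give you an idea of the approach, here is a simplified sketch of the kind of helpers involved. Every DOM selector in it is a placeholder: the markup differs between Telegram Web versions and changes over time, so you would need to inspect your version and adjust them.

```js
// Simplified sketch: paste into the DevTools console on Telegram Web with the bot's chat open.
// All selectors are placeholders: inspect your Telegram Web version and adjust them.

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Type a message into the (contenteditable) input field and press the send button.
async function sendMessage(text) {
  const input = document.querySelector('.composer-input');    // placeholder selector
  input.focus();
  document.execCommand('insertText', false, text);             // fills a contenteditable input
  document.querySelector('.btn-send').click();                 // placeholder selector
  await sleep(500);
}

// Read the text of the last message bubble in the chat.
function lastMessageText() {
  const messages = document.querySelectorAll('.message-text'); // placeholder selector
  return messages.length ? messages[messages.length - 1].innerText : '';
}

// Poll until the last message contains the expected text, or fail after a timeout.
async function waitForReply(expected, timeoutMs = 10000) {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    if (lastMessageText().includes(expected)) return true;
    await sleep(300);
  }
  throw new Error(`Timed out waiting for a reply containing: ${expected}`);
}

// Click an inline keyboard button by its visible label.
function clickBotButton(label) {
  const buttons = [...document.querySelectorAll('.reply-markup button')]; // placeholder selector
  const button = buttons.find((b) => b.innerText.trim() === label);
  if (!button) throw new Error(`Button not found: ${label}`);
  button.click();
}
```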

You can run the test, and while it’s running, you can relax with a cup of tea or work on something else. Once the test is completed, you get the results — whether the bot passed or if there were any issues.

Limitations

The main drawback of this approach is that it doesn’t provide a full testing cycle. Ideally, I would integrate this with Selenium to run tests automatically from a console or CI/CD pipeline, but so far, I’ve encountered issues with authenticating in Telegram via Selenium.

Another limitation is that Telegram has multiple platforms (iOS, Android, web), and they don’t always behave the same way. For example, the layout of messages may differ across platforms, which makes it hard to test the user experience consistently.

Key Benefits

Despite its limitations, this was the quickest and easiest way to introduce some form of test automation into the project, which initially had none. It significantly improved our development speed and confidence in the bot’s functionality.

Here’s a simple example of how you can test a scenario using the script:

  1. Send a message to the bot.
  2. Check the bot’s response for expected text.
  3. Click a button in the bot’s response.
  4. Wait for the next message and verify it.
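
With helpers like the ones sketched above, that scenario could be written roughly like this (the command, button label, and expected texts are made-up examples):

```js
// Example scenario built on the hypothetical helpers above; all texts and labels are examples.
async function testStartScenario() {
  await sendMessage('/start');            // 1. send a message to the bot
  await waitForReply('Welcome');          // 2. check the bot's response for expected text
  clickBotButton('Settings');             // 3. click a button in the bot's response
  await waitForReply('Choose an option'); // 4. wait for the next message and verify it
  console.log('Scenario passed');
}

testStartScenario().catch((e) => console.error('Scenario failed:', e));
```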

Future Plans

One of my future goals is to integrate Selenium more effectively for even better automation. However, for now, this solution works, and it’s saving a lot of manual effort.

Conclusion

To summarize, if you’re working on a Telegram bot and need a quick way to set up integration tests, testing through Telegram Web is a simple and effective solution. Although it doesn’t offer full coverage and automation, it’s a great starting point when no tests are in place.


Feel free to ask any questions in the comments or reach out to me directly. Thank you for reading!