About WebTranslateIt.com

WebTranslateIt.com is a web-based tool for translating documents and software.

Learn more at WebTranslateIt.com.

Recent posts

We’re hiring an experienced Ruby on Rails developer (remote)

Posted by Edouard on September 3, 2021

Hi there! We’re looking for an experienced, remote, full-stack Ruby on Rails developer for ongoing, long-term work, with at least 3 years of work experience. Ideally we’d like to start with you doing a few tasks as a freelancer and see how it goes from there.

About us

WebTranslateIt is a completely remote, bootstrapped, profitable SaaS company launched in 2009 and built with Ruby on Rails. Our software helps hundreds of software companies manage their translations in order to reach new markets. You can read more about our history here.

The team is small — in fact it’s just Estelle, who handles the administrative and financial side, and me, Ed, working on software development (I’m also the founder and CEO). After more than 12 years working on WebTranslateIt’s code I am looking to step back from programming to give myself time to steer and grow the company. The app is used by hundreds of companies all over the world.

The software is large and complex, but we designed it to be modular. For instance, all the language file parsing code (we support over 40 different file formats) lives in a separate rubygem library. The same goes for the code that connects to the machine translation APIs, and for the code that verifies whether a translation is semantically valid.

Our stack: Ruby on Rails, nginx, Passenger Enterprise Edition, Postgres, delayed_job background workers and Sphinx for full-text search. It runs on 2 high-performance bare-metal servers (one hosting our app and files, the other hosting our database).

Although our software has grown relatively complex over the years, I strongly believe in keeping software as simple as possible, and I think that in software, boring is a feature. We’re running on a vanilla Ruby on Rails stack with a stock PostgreSQL database. We’re not looking for a developer who pushes or fights us to adopt the latest technology fads, which would make the software harder to maintain. Everything is a trade-off. With technology advancing and processing power getting cheaper, we’re looking at simplifying our software stack by removing dependencies. For instance, we’d love to be able to use PostgreSQL’s now built-in full-text search instead of relying on Sphinx. Sphinx has been good to us, but it also adds a certain layer of complexity (indexes, re-indexes via background jobs, etc.).
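
As a rough illustration of the kind of simplification we have in mind, here is a minimal sketch of querying PostgreSQL’s built-in full-text search straight from ActiveRecord, with no external search daemon. The Segment model and its text column are hypothetical, purely for the example:

    # Hypothetical sketch: PostgreSQL full-text search from a plain
    # ActiveRecord scope. The model name and column are made up for
    # illustration; the real schema may differ.
    class Segment < ApplicationRecord
      # Returns segments whose text matches every word of the query.
      scope :full_text_search, ->(query) {
        where("to_tsvector('simple', text) @@ plainto_tsquery('simple', ?)", query)
      }
    end

    # Usage: Segment.full_text_search("welcome message")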

To be clear, we’re not against using newer technologies, but we know every solution has its pros and cons, and one of the big pros for us is simplicity and maintainability. We’re looking for a developer with enough maturity and experience to understand, respect and agree with that.

About you

The work will be done remotely, but since we are looking for someone who can also do some tech support, you must be located in a time zone between GMT-3 and GMT+3.

You must:

  • be proficient in English,
  • have 4+ years of experience with Ruby on Rails,
  • be willing to learn,
  • know how to work with jQuery and JavaScript,
  • be independent, a self-learner, and have a can-do attitude,
  • be courteous and respectful, especially with our customers.

Some of the tasks you will undertake:

  • upgrade the Ruby on Rails framework and dependencies,
  • upgrade the code to integrate some of Rails’s newest technologies,
  • fix bugs with our Stripe integration,
  • integrate a revamped homepage, working with a designer,
  • answer technical customer support requests, file a ticket if there is a problem, and fix the bug,
  • be on call every now and then (only once you’re proficient with the technical support, and with financial compensation),
  • build a file parser for a new file format that we need to support.

Interested? Send me an e-mail introducing yourself at support@webtranslateit.com with a resume and a link to your GitHub profile page.

Why we’ve added Google ReCaptcha to WebTranslateIt

Posted by Edouard on April 15, 2021

Hi there!

Just a quick post to announce that we’ve added a Google ReCaptcha to some of our pages. We know it sucks, but let me explain why we installed the captcha, where, and which version we’ve installed.

Why we’ve added Google ReCaptcha

A few months ago we noticed a huge spike of spammers creating user accounts and spamming other users through the discussions feature. We don’t like people sending spammy e-mails on our behalf, and we didn’t want this to have an impact on our e-mail sending reputation, so we started banning these users, but hundreds of new ones kept creating accounts every day!

For the past 10 years or so we had been using a honeypot (or negative captcha), which worked well, but nowadays it seems robots can figure it out, and some users started having problems using the pages with the honeypot. As it turns out, new versions of Google Chrome now autofill the fields used for the honeypot. We were very reluctant to use a solution like Google ReCaptcha, because it is annoying and invades privacy, but the situation left us with very little choice. If you have a better alternative don’t hesitate to let us know!
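
For context, a honeypot works by rendering a form field that real visitors never see or fill in; a submission that contains a value is assumed to come from a bot. A minimal, hypothetical sketch of the controller side (the field name and controller are made up, not our actual code):

    # Hypothetical honeypot check. The form would render a visually hidden
    # "company_url" field; humans leave it empty, bots tend to fill it in.
    class SignupsController < ApplicationController
      def create
        if params[:company_url].present?
          head :ok # silently drop what looks like a bot submission
          return
        end
        # ... normal sign-up handling ...
      end
    end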

Where we have implemented Google ReCaptcha

We’ve implemented the captcha on 3 pages: the sign-up page, to fight spammy account creation, the password recovery page, to fight spammy “I forgot my password” actions, and the Support Request page, to fight spammy support requests. The Google Captcha code is only included on these 3 pages. If you are browsing any other page, you won’t be tracked by Google’s ReCaptcha servers.

Which version of ReCaptcha we’ve installed

Finally, there are different versions of Captcha. Some of them are a bit intrusive to users’ privacy, others are very intrusive. We installed the one with the fewest implications for our users’ privacy: the “v2 I’m not a robot” captcha, and its code is only installed where we absolutely need it (so, as we said, on the sign-up, password recovery and support request pages).
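
For anyone wondering how a v2 checkbox typically hooks into a Rails app, here is a rough sketch using the widely used recaptcha gem. It is illustrative only and not necessarily how our own integration is written:

    # Rough sketch with the "recaptcha" gem's v2 checkbox. The view helper
    # <%= recaptcha_tags %> goes in the sign-up form; the controller then
    # verifies the response. Names below are illustrative, not our code.
    class UsersController < ApplicationController
      def create
        @user = User.new(user_params)
        if verify_recaptcha(model: @user) && @user.save
          redirect_to root_path
        else
          render :new # the captcha failed or the user record is invalid
        end
      end
    end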

Again, don’t hesitate to let us know if you have other suggestions to improve this. Unfortunately it became a necessity to keep the website running.

How it’s going

We implemented the captcha a few months ago now and we thought it would be interesting to share how these protective measures have been working for us. Frankly, it has stopped most of the spammy account creation, but not all of it. We’re still surprised that some spammers spend time and money creating accounts on WebTranslateIt to advertise products, although we have very few public pages where these links would be displayed anyway.

So we still have a few spammy accounts getting created each day, and we delete them manually as a daily routine.

With that, thanks for reading!

Our Path to Carbon Neutrality

Posted by Estelle on September 29, 2020

Back in 2017, we realized that even though we aren’t a big company, the role our activity played in global warming wasn’t insignificant.
We work remotely so we don’t have to commute, and our servers are hosted in an already carbon-neutral datacenter in the Netherlands (Leaseweb’s amazing AMS-01). To our surprise, we discovered that the biggest impact didn’t come from our travel, electricity or water consumption, but from the manufactured goods we buy, especially high-tech ones like laptops and phones.

We looked for solutions to offset our carbon footprint, and there are quite a few non-profit organizations offering to involve you in projects that help develop sustainable energy all over the world, or in initiatives as simple as reforestation.

We decided to get involved with the Fondation Goodplanet - created by the photographer and environmentalist Yann Arthus-Bertrand - whose Action Carbon Solidaire Programme was created in 2006 with the aim of combating climate change by developing sustainable and economically viable alternatives to polluting activities, to the benefit of the most disadvantaged groups.

To this day, we keep making daily efforts as individuals to change our consumption habits and not throw away anything that can be repaired. As a company, we keep collaborating with the Fondation Goodplanet by donating money every year to finance a variety of down-to-earth and exciting projects, such as developing sustainable agriculture and forestry, encouraging biodiversity conservation and restoration, or creating bioclimatic schools in developing countries.

Join us in the fight; we believe climate change is not inevitable.

Want to calculate your carbon footprint? Go to CarbonFootprint.com

Want more? Follow us on Facebook and Twitter

Rate limiting on the String API

Posted by Edouard on December 17, 2019

Hey there!

Last Thursday morning, the 12th of December, WebTranslateIt was hit with severe performance issues. The website and the API became unavailable at times.

Every 10 minutes, we received massive waves of requests (more than 400 requests per second on the API), which made the web server very slow.

Upon investigation, we found out that the performance issues were due to some users running automated tools making requests against the String API. We always intended this API to be rate limited (in fact, our documentation has always stated that we limit requests to 25 requests per second), but we noticed that some users were using the API at way over 100 requests per second, so our rate-limiting was clearly not working.

We are now properly limiting this API to 25 requests per second, with bursts of up to 30 requests per second. We think this is a very reasonable limit given that this API serves up to 250 segments and all their translations per request, and making over 30 requests per second to one web service really is a lot of requests.

If you get an HTTP 429 Too Many Requests error, it means that you are being hit by our rate limiter. The only way to avoid this limit is to reduce the number of requests you are making to that endpoint. You can do that by implementing caching or by introducing a pause of a few milliseconds on the device performing the requests. We limit requests by IP and by organization. If you can neither use caching nor pause the server making these requests, ask yourself whether you are using the correct API for the job: the File API, for instance, lets you download all the segments of a file in one request.
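
To illustrate the client side, here is a minimal Ruby sketch that backs off and retries when it receives a 429. The URL is a placeholder and the one-second pause is just an example value:

    # Minimal, hypothetical sketch: pause and retry when a rate-limited
    # endpoint answers 429 Too Many Requests.
    require "net/http"
    require "uri"

    def fetch_with_backoff(url, max_attempts: 5)
      max_attempts.times do
        response = Net::HTTP.get_response(URI(url))
        return response.body if response.is_a?(Net::HTTPSuccess)

        if response.is_a?(Net::HTTPTooManyRequests)
          sleep 1 # back off before trying again
          next
        end

        raise "Unexpected response: #{response.code}"
      end
      raise "Still rate-limited after #{max_attempts} attempts"
    end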

Properly rate-limiting this API is essential. Having no rate limit on it results in an unreliable API and service in general.

If you need any help or have any issues with our API, don’t hesitate to contact us.