If you are not a fan of heavy databases, or of databases at all, Redis is a great fit for the times you really do need one.
1. In-memory database - Play with it, set variables, change them, close it whenever you feel like, and nothing has to touch the disk unless you persist explicitly. This is precisely why it is one of the fastest databases around - in simple read/write benchmarks it easily outpaces MySQL and other disk-backed databases.
2. Variables with timeout - Set variables that expire after a timeout, very handy for caches and similar applications, since expiration is native to Redis.
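As a sketch of what native expiry looks like from Python (the function name, key names and TTL below are made up for illustration; `ex=` is redis-py's keyword for SET's EX option):

```python
def cache_with_ttl(client, key, value, ttl_seconds):
    """Store a value that Redis deletes on its own after ttl_seconds.

    The ex= argument maps to SET's EX option, so no cleanup code is
    needed on our side -- the expiry happens inside Redis itself.
    """
    client.set(key, value, ex=ttl_seconds)
    return client.ttl(key)  # seconds left before the key disappears

# Against a real server (assumes redis-py is installed and Redis is
# running locally on the default port):
#   import redis
#   r = redis.Redis(host="localhost", port=6379, db=0)
#   cache_with_ttl(r, "session:42", "abc123", ttl_seconds=30)
```

The `client` parameter is any object exposing `set` and `ttl`; against a real `redis.Redis` connection the two calls become one SET and one TTL command.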
3. Simple types, learnable in less than a day - It supports a handful of types: strings, lists and hashes (plus sets and sorted sets), closely mirroring what every programming language already has - so dealing with them in Redis feels like second nature, provided the client library handles the type conversion. In Python, redis-py does this well.
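A minimal sketch of that one-to-one mapping, with invented key names for illustration (note that by default redis-py returns bytes; pass `decode_responses=True` to `redis.Redis` to get strings back):

```python
def type_roundtrip(client):
    """Exercise the everyday Redis types: string, list and hash.

    Each call maps directly onto a Redis command (SET/GET,
    RPUSH/LRANGE, HSET/HGETALL); redis-py exposes them as
    same-named methods.
    """
    client.set("site:name", "redis")                                 # string
    client.rpush("jobs", "resize", "upload")                         # list
    client.hset("user:1", mapping={"name": "Ada", "role": "admin"})  # hash
    return (
        client.get("site:name"),
        client.lrange("jobs", 0, -1),  # -1 means "up to the last element"
        client.hgetall("user:1"),
    )

# Against a real server (assumes redis-py and a local Redis):
#   import redis
#   r = redis.Redis(decode_responses=True)
#   type_roundtrip(r)
```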
4. Batch read/write with pipelines - Run multiple commands in a single batch and get all the read results back at once, which cuts down round trips. More importantly, when the pipeline is run as a transaction, no command from another client can be executed in between.
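A sketch of a pipelined batch with redis-py (the function name and keys are invented for illustration; `transaction=True` is redis-py's default, wrapping the queued commands in MULTI/EXEC):

```python
def write_then_read_batch(client, mapping):
    """Queue several SETs followed by GETs, then execute them as one batch.

    execute() sends everything in a single round trip and returns the
    replies in command order; with transaction=True the commands run
    atomically, so no other client's command interleaves between them.
    """
    pipe = client.pipeline(transaction=True)
    for key, value in mapping.items():
        pipe.set(key, value)
    for key in mapping:
        pipe.get(key)
    return pipe.execute()

# Against a real server (assumes redis-py and a local Redis):
#   import redis
#   r = redis.Redis(decode_responses=True)
#   write_then_read_batch(r, {"temp": "21.5", "humidity": "40"})
```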
1. No integer type - It has all those good old simple types, but I still miss the ability to just save an integer and get it back in the same form. You must store it as a string, and after reading it back, convert it to int explicitly (although counter commands like INCR do interpret string values as integers on the server side).
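The conversion is a one-liner, but it has to live on our side; a small helper along these lines (the function name is made up, and it works whether the client returns bytes or strings):

```python
def get_int(client, key, default=None):
    """Read a key and convert it back to int.

    redis-py returns bytes by default (or str with
    decode_responses=True); int() happily accepts either,
    so the explicit conversion happens here, not in Redis.
    """
    raw = client.get(key)
    if raw is None:
        return default
    return int(raw)

# Against a real server (assumes redis-py and a local Redis):
#   import redis
#   r = redis.Redis()
#   r.set("visits", 42)        # stored as the string "42"
#   get_int(r, "visits")       # back to the int 42
```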
2. Limited list sub-operations - Lists can be stored as such, and LSET can replace a single element by index, but anything richer - e.g. swapping the i-th and j-th values - means reading the elements out and writing them back yourself, rather than a single in-place command.
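For instance, a swap has to be composed out of two LINDEX reads and two LSET writes (the function name is invented; `lindex` and `lset` are the redis-py methods for those commands):

```python
def swap_list_items(client, key, i, j):
    """Swap the i-th and j-th elements of a Redis list.

    There is no single Redis command for this, so it takes four
    round trips (LINDEX x2, LSET x2) -- or one pipeline -- and it
    is not atomic unless wrapped in a transaction.
    """
    vi = client.lindex(key, i)
    vj = client.lindex(key, j)
    client.lset(key, i, vj)
    client.lset(key, j, vi)

# Against a real server (assumes redis-py and a local Redis):
#   import redis
#   r = redis.Redis()
#   r.rpush("mylist", "a", "b", "c")
#   swap_list_items(r, "mylist", 0, 2)
```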
3. Windows is not officially supported, only ported by MSOpenTech - Since Windows doesn't enjoy official support and I am primarily a Windows user, I dislike that new versions are usually unavailable for a long time before they get ported over. That said, MSOpenTech has my respect for maintaining the port so well.
If you are looking for an alternative to MySQL, Redis is not it. It can't cope with databases too large to be held in memory.
Redis is more of a short and sweet thing that can handle a relatively small number of variables with high throughput and is easy to deploy.
I am generally not a fan of heavy databases and didn't think I would actually need one at all - until I did. It was not because the data was large, but because it needed reads and writes from multiple processes running on two remote machines. I turned to Redis because I had heard it is very simple to get started with and really fast. And to my surprise, it really did work like a breeze.
For Python projects, redis-py is the recommended Redis client library. It mostly just maps Redis commands to methods, like get, set and hgetall, but it also supports transactions, pipelines and more.
The code went from supporting just one machine (variables in Python) to any number of remote machines (via Redis) in just one day. It became only a matter of installing the Redis client on all remote machines and reading the data off the master Redis database.