March 28, 2017
00:04 BoogieMan joined
00:31 edge226 joined
00:34 raspado_ joined
00:35 raspado_ joined
00:36 roelmonnens joined
00:46 sz0 joined
00:53 svm_invictvs joined
01:03 edge226 joined
01:05 daxelrod joined
01:19 white_knight joined
01:31 roelmonnens joined
01:47 roelmonnens joined
02:15 daxelrod joined
02:53 kushal joined
02:57 roelmonnens joined
03:00 SkyRocknRoll joined
03:24 kulelu88 joined
03:39 EyePulp joined
03:49 svm_invictvs joined
04:05 jud joined
05:29 svm_invictvs joined
05:33 RemiFedora joined
05:37 hphuoc25 joined
05:44 tavish joined
05:54 hphuoc25 joined
06:24 masber joined
06:45 pila joined
06:47 hphuoc25 joined
06:52 roelmonnens joined
06:54 raspado joined
06:55 hphuoc25 joined
07:02 WindChimes joined
07:08 tavish joined
07:15 WindChim_ joined
07:20 roelmonnens joined
07:42 pila joined
08:01 WindChimes joined
08:04 bf_ joined
08:20 BrianMiller joined
08:22 Dave_R joined
08:31 fakenerd joined
08:38 fractalsea joined
08:48 programmingcool joined
08:59 drbobbeaty joined
09:04 rendar joined
09:13 bf_ joined
09:15 ChoHag left
09:29 ibianchi joined
09:29 winem_ joined
09:34 tavish joined
09:46 fractalsea joined
09:55 fractalsea joined
10:20 ppang joined
10:25 soveran joined
10:28 kulelu88 joined
10:29 ppang_ joined
10:45 rchavik joined
10:47 hphuoc25 joined
11:01 Mr__Anderson joined
11:21 felixjet joined
11:22 tavish joined
11:31 drbobbeaty joined
11:42 ogny joined
11:43 winem_ joined
11:55 EyePulp joined
11:59 soveran joined
11:59 wlightning-fuel joined
12:10 fractalsea joined
12:20 fractalsea joined
12:39 sanyo joined
12:40 _Wise__ joined
12:46 fractalsea joined
12:53 cyborg-one joined
13:06 fakenerd joined
13:19 fractalsea joined
13:32 fractalsea joined
13:34 lukasdboer joined
13:36 EyePulp joined
13:50 edge226 joined
13:51 shinnya joined
14:13 fractalsea joined
14:32 daxelrod joined
14:33 fractalsea joined
14:37 dblessing joined
14:39 fractalsea joined
14:43 etehtsea joined
14:47 fractalsea joined
14:58 sz0 joined
14:59 al-damiri joined
15:32 wlightning-fuel joined
15:33 wlightning-fuel joined
15:45 raspado joined
16:02 roelmonn_ joined
16:03 shinnya joined
16:08 wlightni_ joined
16:12 tavish_ joined
16:14 tavish_ joined
16:14 soveran joined
16:14 soveran joined
16:19 tavish_ joined
16:21 tavish_ joined
16:28 tavish_ joined
16:31 djanowski_ joined
16:33 edge226 joined
16:40 fractalsea joined
16:40 svm_invictvs joined
16:46 djanowski joined
16:51 wlightning-fuel joined
16:55 fractalsea joined
17:02 wlightning-fuel joined
17:19 fractalsea joined
17:23 soveran joined
17:23 soveran joined
17:39 hashpuppy joined
17:53 sanyo joined
17:55 timg__ joined
18:14 dblessing joined
18:15 sz0 joined
18:22 minimalism joined
18:36 wlightning-fuel joined
18:53 GreenJello joined
18:54 pila joined
18:54 maxmatteo joined
18:55 <danemacmillan> If I'm looking to build a dedicated redis server--what are some good rules of thumb when it comes to allocating resources, like hardware?
18:55 <danemacmillan> Do I need a lot of cpu, memory, or disk space, etc?
18:56 <danemacmillan> I will only be running one instance of it
18:56 <maxmatteo> always keep in mind you are RAM bound... so it really depends on your payload
18:57 <maxmatteo> redis is also single-threaded
18:58 <danemacmillan> So could I get away with 2 CPUs and, say, 24GB ram, and about 50GB SSD?
18:58 <maxmatteo> usually you should keep at least 30% headroom
18:58 <maxmatteo> on ram
18:59 <danemacmillan> What would that look like?
18:59 <danemacmillan> I typically see about 30k sessions a day
18:59 <maxmatteo> ssd depends ...if you would like persistence to disk
18:59 kulelu88 left
18:59 <danemacmillan> I go with redis so sessions aren't lost if the daemon reboots
18:59 <danemacmillan> So I would want the persistence to disk
18:59 <maxmatteo> sure thing
19:00 <maxmatteo> sounds like enough... as far as I can tell
19:00 <maxmatteo> get used to redis-benchmark
19:00 <maxmatteo> you can specify size/clients etc
19:00 <maxmatteo> and see some performance stats
19:00 <maxmatteo> against your redis server
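As a rough sketch of what "get used to redis-benchmark" looks like in practice (the host 10.0.0.5 is hypothetical; tune -c, -n and -d to match your own client count and payload size):

    # 30 clients, 100k requests, 1 KB values, SET/GET only, summary output
    redis-benchmark -h 10.0.0.5 -p 6379 -c 30 -n 100000 -d 1024 -t set,get -q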
19:00 <danemacmillan> Okay cool
19:01 <danemacmillan> What did you mean when you said 40% headroom on ram?
19:01 <maxmatteo> usually redis is fast enough ;)
19:01 <danemacmillan> 30%
19:02 <maxmatteo> your total size of all saved data in redis should be around max 17gb
19:02 <maxmatteo> but it always depends on what exactly you are doing
19:03 <maxmatteo> make sure you use a 64-bit OS
19:03 <maxmatteo> what kind of client are you using?
19:03 <maxmatteo> and language?
19:04 raspado joined
19:04 <danemacmillan> Okay, so if I have a machine with 24GB, I should probably configure redis to use no more than 17GB?
19:06 <maxmatteo> yeah i would start with that
19:06 <maxmatteo> you can always use an eviction policy that will auto-flush the oldest entries, as an example
19:06 <maxmatteo> redis offers different strategies for that
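A minimal redis.conf sketch of the sizing and eviction advice above, assuming the 24GB box discussed here (24GB minus ~30% headroom is roughly 17GB). The policy shown is just one of the strategies redis offers; whether eviction is acceptable at all depends on what you store (see the later discussion about sessions):

    # ~30% headroom on a 24GB box: 24 * 0.7 ≈ 17
    maxmemory 17gb
    # what to do when the limit is hit; volatile-lru evicts only keys that
    # carry a TTL (e.g. cache entries), noeviction is the default
    maxmemory-policy volatile-lru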
19:07 <maxmatteo> are you storing serialized json?
19:07 <maxmatteo> php?
19:07 <danemacmillan> No, mostly just regular string data
19:07 <danemacmillan> Yes, php, and 64bit
19:07 <danemacmillan> php 5.6
19:07 <maxmatteo> i am doing a redis thing in php too
19:07 <maxmatteo> go with the predis package
19:08 <maxmatteo> to start with
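A minimal predis sketch in PHP, assuming a hypothetical redis host at 10.0.0.5 and an illustrative session key; this is just the bare client, not the Magento/Cm_RedisSession integration that comes up later in the conversation:

    <?php
    // composer require predis/predis
    require 'vendor/autoload.php';

    $client = new Predis\Client([
        'scheme' => 'tcp',
        'host'   => '10.0.0.5', // hypothetical dedicated redis box
        'port'   => 6379,
    ]);

    // store a serialized session blob with a 1-hour TTL, then read it back
    $client->setex('session:abc123', 3600, serialize(['user_id' => 42]));
    $session = unserialize($client->get('session:abc123'));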
19:08 <maxmatteo> or are you using the native php session->redis handler?
19:09 <danemacmillan> It's baked into the framework (Magento). It's not the native one. I'm installing the pecl-redis extension
19:09 <maxmatteo> ah i see
19:09 <maxmatteo> i usually try to avoid pecl packages
19:09 <maxmatteo> :D
19:10 <minus> how many concurrent sessions (peak) do you expect?
19:10 <danemacmillan> There's no option with the framework.
19:10 <maxmatteo> ok
19:10 <minus> and what do you store in them
19:11 <maxmatteo> sometimes it's nice to gzip your values
19:11 <maxmatteo> using something like this for magento sounds interesting: https://github.com/colinmollenhour/Cm_RedisSession
19:11 <danemacmillan> Maybe about 300 concurrent users--and most of it session info (in one db), and some caching that the framework generates (like xml files, some html).
19:12 <danemacmillan> maxmatteo: I'm using that
19:12 <maxmatteo> 300 is not a lot
19:12 <danemacmillan> Nope
19:12 <maxmatteo> one app server and one redis?
19:12 <maxmatteo> like one-to-one, network-wise
19:12 <minus> you can probably save the cost of a dedicated redis machine
19:12 <danemacmillan> That's the base setup, yes
19:12 <maxmatteo> or go with a cloud server from digitalocean,aws etc... :D
19:13 <danemacmillan> It has to be dedicated, because the app server can run on any number of instances
19:13 <minus> yeah, but you can still run the one redis instance on the app server
19:15 <maxmatteo> but if you switch app servers your cache on the new server would be cold
19:15 <maxmatteo> replication is an option
19:15 <maxmatteo> but things are not getting easier i guess
19:16 <danemacmillan> Okay, when you say concurrent, maybe you don't mean users online at the same time. My current redis instance processes about 110k calls per minute
19:17 <danemacmillan> The app server is a cluster of servers, and there can be anywhere between 2 and 6 at any time.
19:17 <maxmatteo> ok, thats an important info
19:17 <maxmatteo> usually you should run a proxy on each app server
19:18 <maxmatteo> to keep the number of open redis connections small
19:18 <maxmatteo> like twemproxy as an example
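A sketch of a twemproxy (nutcracker) pool definition that would sit on each app server, assuming the hypothetical redis box at 10.0.0.5 and a hypothetical config path. The proxy listens locally, so PHP opens short-lived connections to 127.0.0.1 while the proxy keeps a small pooled set of connections to redis:

    # /etc/nutcracker/nutcracker.yml, one pool per app server
    sessions:
      listen: 127.0.0.1:22121
      hash: fnv1a_64
      distribution: ketama
      redis: true
      timeout: 400
      servers:
        - 10.0.0.5:6379:1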
19:18 <minus> do you have a variable number of app servers?
19:18 <danemacmillan> minus: yes, I will.
19:18 <minus> okay, that's a different story then
19:18 <maxmatteo> ;)
19:19 <minus> we have a 2-server failover redis setup and use HAProxy to route to the active one
19:19 <minus> twemproxy does the same iirc
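A sketch of the HAProxy pattern minus describes, assuming two hypothetical redis nodes at 10.0.0.5 and 10.0.0.6 replicating to each other; the tcp-check queries INFO replication on each node and only routes traffic to whichever one currently reports role:master:

    listen redis
        bind *:6379
        option tcp-check
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:master
        server redis-a 10.0.0.5:6379 check inter 1s
        server redis-b 10.0.0.6:6379 check inter 1s backup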
19:21 <danemacmillan> Alright. I don't think we're ready to load balance redis, but our app servers are definitely crunching a lot, so it warrants the added complexity. Can I get away with a single redis server with say 4 cores and 24GB ram based on the information I gave?
19:21 <maxmatteo> well haproxy won't aggregate your connections on the app side...
19:21 <maxmatteo> but this only matters when using stateless php, as an example
19:22 <maxmatteo> sounds like a good starting point, have seen worse sizing plans :D
19:22 <minus> danemacmillan: one core will do ;)
19:23 <minus> and i'd say 24GB is massive overkill
19:23 <danemacmillan> If you can recommend another hardware config, let me know. I'm not married to the config--just feeling it out.
19:23 <minus> redis is single threaded
19:23 <danemacmillan> I'd love to save money, but if the hardware demands are higher, I will pay it.
19:23 <minus> it won't touch more than one core
19:23 <minus> i guess you can start big and scale down
19:24 <minus> since LRU eviction isn't really an option for you
19:24 <danemacmillan> I do have that luxury once I'm done with the migration, so I can maybe find something in the middle
19:24 <danemacmillan> LRU eviction isn't an option?
19:24 <minus> but scaling down involves downtime, so that's not great
19:24 <maxmatteo> as I said at the beginning: you are usually RAM bound
19:25 <maxmatteo> and don't let anything else run on the redis node
19:25 <minus> danemacmillan: yeah, if you've got your RAM full of active sessions you can't really kill any of those
19:25 <minus> for cached content however it's a different story
19:25 <danemacmillan> No, this box will JUST run redis
19:25 <maxmatteo> where are you storing your sessions right now? php file handler?
19:26 <danemacmillan> In redis--but it's all on one giant machine
19:26 <minus> oh, so you already do have an idea how much memory redis uses
19:26 <maxmatteo> and you are serving all your sessions from there?
19:26 <minus> (redis-cli info memory)
19:26 <maxmatteo> ..yeah
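The redis-cli check minus mentions looks roughly like this (the field values here are illustrative, not from danemacmillan's server):

    $ redis-cli info memory
    used_memory:1342177280
    used_memory_human:1.25G
    used_memory_peak_human:1.31G
    mem_fragmentation_ratio:1.08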
19:26 <danemacmillan> And I'm in the process of building a new system with more moving parts, so we can target our growth a bit easier
19:26 <maxmatteo> ;)
19:26 <maxmatteo> tell us ;)
19:27 <maxmatteo> total connected clients might also be interesting
19:29 <danemacmillan> Getting that info--one sec
19:30 <danemacmillan> memory used is 9.34GB, connected clients is 30
19:31 <danemacmillan> All caches were busted five hours ago, so that's what accumulated since then
19:31 <danemacmillan> It probably won't grow much more, though
19:31 <maxmatteo> you are doing fine...
19:31 <maxmatteo> how much ram does that server have?
19:32 <danemacmillan> Too much
19:33 <danemacmillan> About 106GB. 60GB is in use right now
19:34 <danemacmillan> So based on this info, what hardware would you throw at this? 1 core?
19:35 <minus> redis cannot use much more than one core
19:35 <maxmatteo> your dual cpu setup will be fine
19:35 <maxmatteo> yeah keep in mind
19:35 <minus> saving to disk forks, so that can use a second core
19:36 <maxmatteo> redis will not use your 16 cores or whatever cpu :)
19:36 <maxmatteo> so 10 gigs will fit perfectly within a 24gig server
19:36 <danemacmillan> Alright cool. My redis config is currently set with maxmemory 10g
19:37 <danemacmillan> And should I ensure that the hard disk has enough to store anything from redis if there's a reboot?
19:37 <maxmatteo> i would raise that... at some point you will hit that limit... and then your clients will get mad :D
19:37 <maxmatteo> yeah keep twice the size of the ram available on disk
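A redis.conf persistence sketch matching the advice above (the directives are standard; the data directory is hypothetical). RDB snapshots and AOF rewrites write a new file next to the old one before swapping it in, which is why keeping roughly twice the in-memory dataset free on disk is a sane rule of thumb:

    dir /var/lib/redis
    # RDB snapshots: after 900s/1 change, 300s/10 changes, 60s/10000 changes
    save 900 1
    save 300 10
    save 60 10000
    # AOF for more durable sessions, fsync once per second
    appendonly yes
    appendfsync everysec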
19:38 daxelrod1 joined
19:38 <minus> i'd set maxmemory much higher and leave it running a bit longer and see how far it grows
19:38 <minus> note to self: acquire redis memory usage metrics
19:38 <minus> everyone needs more metrics
19:39 <maxmatteo> i can recommend this on an internal ip: https://github.com/junegunn/redis-stat
19:39 <danemacmillan> I'll bookmark that
19:39 <maxmatteo> if you'd like a quick and easy visual check
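Assuming the invocation style from the redis-stat README (verify against the repo linked above; the host and ports here are hypothetical), usage is roughly:

    # console stats against the redis box, refreshed every 5 seconds
    redis-stat 10.0.0.5:6379 5
    # or serve the built-in web dashboard on an internal port
    redis-stat --server=8080 10.0.0.5:6379 5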
19:40 <danemacmillan> So 2 cores, around 20GB ram, and 50GB SSD would be more than sufficient for my needs?
19:40 <danemacmillan> Should I add more cores? I'm just running redis on it.
19:40 <maxmatteo> no, you don't need more cores!
19:40 <danemacmillan> Even for writing to disk?
19:41 <maxmatteo> should be fine
19:41 <danemacmillan> Alright
19:42 <danemacmillan> Thanks for getting all this down for me
19:42 <danemacmillan> One last thing, should I set vm-max-threads?
19:42 <danemacmillan> Or just completely disable the vm stuff?
19:43 <maxmatteo> disable it
19:43 <maxmatteo> " Redis VM is now deprecated. Redis 2.4 will be the latest Redis version featuring Virtual Memory "
19:44 <maxmatteo> and run latest redis stable
19:44 <maxmatteo> 3.2
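Consistent with the quote above, the vm-* directives (vm-enabled, vm-max-threads, etc.) were dropped from the shipped redis.conf after 2.4, so on 3.2 there is nothing to disable -- just make sure they are not carried over from an old config file, and confirm which version is actually running:

    $ redis-server --version
    $ redis-cli info server | grep redis_version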
19:51 <danemacmillan> Alright guys--I'm off to apply some of this stuff now
19:51 <danemacmillan> Thanks again
19:56 <maxmatteo> sure
20:02 <minus> maxmatteo: i'd rather pipe some of that info into something that lets me make a nice graph in grafana
20:11 cyborg-one joined
20:12 <maxmatteo> sure thing...
20:13 <maxmatteo> depends on what you are looking for
20:13 flexd joined
20:13 <minus> yeah, metrics from different systems too
20:26 winem_ joined
20:36 svm_invictvs joined
20:41 maxmatteo_ joined
20:42 maxmatteo_ joined
20:43 maxmatteo joined
20:43 maxmatteo joined
20:45 maxmatteo joined
20:48 maxmatteo joined
20:52 maxmatteo joined
20:53 maxmatteo joined
21:02 soveran joined
21:02 soveran joined
21:23 daxelrod joined
21:26 rendar joined
21:26 rendar joined
21:31 Mr__Anderson joined
21:48 hahuang65 joined
21:49 hahuang61 joined
21:52 roelmonnens joined
22:01 whee joined
22:04 alphor joined
22:14 maxmatteo joined
22:22 masber joined
22:22 whee joined
22:23 wlightning-fuel joined
22:34 GreenJello joined
22:36 enigma_raz joined
22:53 pandeiro joined
22:53 white_knight joined
23:36 roelmonnens joined
23:49 BrianMiller joined
23:52 Azure joined
23:59 svm_invictvs joined