00:02 hahuang61 joined
00:44 dblessing joined
00:59 roelmonnens joined
01:39 soveran joined
01:49 hahuang61 joined
01:54 wlightning-fuel joined
02:44 daxelrod joined
02:48 roelmonnens joined
03:05 wlightning-fuel joined
03:28 rchavik joined
03:30 rchavik joined
03:32 rchavik joined
03:34 amosbird joined
03:37 sfa joined
03:40 soveran joined
03:40 soveran joined
03:42 RemiFedora joined
03:50 hahuang61 joined
04:02 shortdudey123 joined
04:20 enigma_raz joined
04:24 enigma_raz joined
04:42 soveran joined
04:45 SkyRocknRoll joined
05:03 EyePulp joined
05:03 ppang joined
05:04 hos7ein joined
05:06 cyclones92_ joined
05:08 lxkm_ joined
05:08 brycebaril_ joined
05:08 fakenerd_ joined
05:09 michel_v_ joined
05:09 rrva_ joined
05:09 irsol_ joined
05:09 ejnahc_ joined
05:12 averythomas_ joined
05:13 winteriscoming joined
05:13 shortdudey123_ joined
05:13 SirCmpwn_ joined
05:17 ahfeel joined
05:18 codedmart joined
05:20 lukasdboer joined
05:21 hahuang61 joined
05:22 SebastianFlyte joined
05:39 madgoat joined
05:39 madgoat left
05:40 soveran joined
05:40 soveran joined
05:51 soveran joined
05:58 shesek joined
06:08 rendar joined
06:11 tarkus joined
06:13 roelmonnens joined
06:55 hos7ein joined
07:12 Jarvis_ joined
07:22 Mr__Anderson joined
07:24 Jarvis185 joined
07:31 JohnnyRun joined
07:45 schndr_ joined
07:48 Dave_R joined
08:17 svm_invictvs joined
08:20 programmingcool joined
08:24 Guest96 joined
08:30 HelgeO joined
08:34 Jarvis185 joined
09:33 irclogger_com joined
09:43 Pagan joined
09:43 Pagan joined
09:53 etehtsea joined
10:07 hive-mind joined
10:12 Mr__Anderson joined
10:15 rchavik joined
10:32 etehtsea joined
10:41 cyborg-one joined
10:48 ChrisJames02170 joined
10:54 Guest96 joined
10:56 chipotle joined
11:08 roelmonn_ joined
11:08 etehtsea joined
11:17 drbobbeaty joined
11:20 etehtsea joined
11:24 etehtsea joined
11:35 efphe joined
11:36 EyePulp joined
11:50 programmingcool joined
12:09 Guest96 joined
12:18 cyborg-one joined
12:38 CountryNerd joined
13:05 programmingcool joined
13:09 edvorg joined
13:14 forgotmynick joined
13:26 SkyRocknRoll joined
13:28 dblessing joined
13:28 Mr__Anderson joined
13:31 wlightning-fuel joined
13:36 roelmonnens joined
13:42 SkyRocknRoll_ joined
13:47 EyePulp joined
13:50 wlightning-fuel joined
13:54 rchavik joined
13:56 bannakaffalatta joined
14:01 shinnya joined
14:02 daxelrod joined
14:05 unbalancedparen joined
14:11 al-damiri joined
14:23 tavish joined
14:25 bannakaf_ joined
14:32 tarkus joined
14:33 sz0 joined
14:36 edvorg joined
14:47 parazyd joined
15:02 edvorg joined
15:05 pila joined
15:06 wlightning-fuel joined
15:08 minimalism joined
15:11 roelmonnens joined
15:15 daxelrod joined
15:20 <parazyd> is there such a thing as an "acceptable amount of running redis instances/servers" on a machine?
15:21 <parazyd> or just go crazy with it
15:22 JacobEdelman joined
15:24 FunnyLookinHat joined
15:28 iamchrisf joined
15:28 steeze joined
15:28 winem_ joined
15:30 <Habbie> parazyd, RAM and CPU are your real limits
15:30 <minus> parazyd: the optimum is probably one per core
15:30 <Habbie> likely
15:30 <Habbie> depending on what kind of load they get
15:30 <Habbie> and what your reason for having multiple instances is
15:30 <Habbie> the only time i did multiple instances they all had the same data
15:30 <Habbie> so i could throw more than one cpu core at it
15:31 <parazyd> easier usage/navigation
15:31 <parazyd> ram isn't a limit theoretically
15:32 <Habbie> and cpu?
15:32 <minus> one instance is certainly easier to manage than a cluster
15:32 <parazyd> dunno, probably 4 cores
15:32 <parazyd> each one would keep specific data, in ~127 databases each
15:33 <parazyd> so depending on the specified query, i would call a specific server
15:33 <parazyd> i don't know how else i would navigate(?) if everything was on the same server, with this there is kind of a pattern
15:34 <minus> that's a lot of databases
15:34 <* parazyd> shrugs
15:35 <minus> ¯\_(ツ)_/¯
15:35 <parazyd> it's not much load from my current understanding
15:35 <parazyd> so it's doable
15:35 <parazyd> and in the 127 per server there is a pattern, where i could use a python list or whatever to fill it up
15:40 <Habbie> but why not one instance?
15:41 dblessing joined
15:41 <parazyd> I don't know how to make navigation easy
15:41 <parazyd> let me show you what i'm talking about
15:41 <minus> please do
15:42 <parazyd> i'll paste a sprunge text, easier than spamming here. give me a sec
15:45 fakenerd joined
15:46 <parazyd> http://sprunge.us/IhDb
15:46 <parazyd> here's a blurb
15:47 unbalancedparen joined
15:48 <minus> why do you need to put each thing in a repo in its own DB?
15:48 <parazyd> because the key is the package name, and the value is another hashmap which i can use when i get it from redis
15:49 <parazyd> example file: http://packages.devuan.org/merged/dists/jessie-backports/contrib/binary-all/Packages
15:50 <minus> what exactly are you storing, and what's the purpose of storing/how do you access it? (search for it, get the hash by package name, etc)
15:52 <parazyd> each paragraph (^\n separation) in the Packages file is an entry. so the key is the package name (first line), and the value is the whole package info converted to a hashmap
15:52 <parazyd> i use it through python
15:52 <parazyd> the goal is to have overlays of these files. so i have one iteration of the Packages file, then overlay another one, and another one... etc
15:53 <parazyd> when done, i generate a new 'Package' file with the result
15:53 <minus> ah
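A minimal sketch of the parsing parazyd describes: split the Packages file on blank lines and key each stanza's fields by its package name. The path handling and field parsing here are illustrative assumptions, not something from the discussion.

    def parse_packages(path):
        # Each stanza (blank-line separated paragraph) becomes one dict,
        # keyed by its "Package" field, mirroring the hashmap-per-package idea.
        packages = {}
        with open(path, encoding="utf-8") as fh:
            for stanza in fh.read().split("\n\n"):
                fields = {}
                last_key = None
                for line in stanza.splitlines():
                    if line.startswith((" ", "\t")) and last_key:
                        fields[last_key] += "\n" + line.strip()   # continuation line
                    elif ":" in line:
                        last_key, _, value = line.partition(":")
                        fields[last_key] = value.strip()
                if "Package" in fields:
                    packages[fields["Package"]] = fields
        return packages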
15:53 <parazyd> maybe redis is the wrong tool here
15:53 <minus> yeah, you can do that in python
15:53 <minus> so what do you store in the different databases?
15:54 <minus> i'm still confused about that
15:54 <parazyd> i was thinking a Package file per db
15:55 <minus> ah
15:55 <minus> you can just encode that info into the key
15:55 <parazyd> what do you mean?
15:55 <minus> e.g.: jessie-backports:contrib:binary-all:<package name>
15:56 <parazyd> !
15:56 <parazyd> never thought of that
15:56 <minus> that's for storing only though, it's not gonna help you with overlaying
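A hedged sketch of minus's key-encoding idea with redis-py; the key layout follows the example above, while the connection details and sample fields are assumptions.

    import redis

    r = redis.Redis()   # assumes a local Redis on the default port

    def store_package(suite, component, arch, fields):
        # One Redis hash per package, with the repo coordinates encoded in the key
        # instead of spreading packages across separate databases.
        key = "{}:{}:{}:{}".format(suite, component, arch, fields["Package"])
        r.hset(key, mapping=fields)   # hmset() in the redis-py of that era

    store_package("jessie-backports", "contrib", "binary-all",
                  {"Package": "example-pkg", "Version": "1.0-1"})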
15:56 <parazyd> i can always have a temporary db where i do the overlaying
15:56 <minus> with overlaying you mean if you take 2 repos you first add all packages from repo 1, then all from repo 2 and if one already exists it'll get overridden?
15:57 <parazyd> no, the opposite
15:57 <parazyd> top priority gets overridden only if it's from the same priority
15:57 <parazyd> if lower, then it's dropped
15:57 <minus> sounds hard to do in redis
15:57 <minus> if possible at all
15:58 <parazyd> yes indeed
15:59 <parazyd> but i could use it for storing though
15:59 <minus> where do you get the priority from?
15:59 <parazyd> and then do the overlaying in python
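A sketch of that overlaying step in Python, following the rule stated above (an entry already present is only replaced from the same priority; lower-priority duplicates are dropped). The data layout is an assumption.

    def overlay(overlays):
        # overlays: iterable of (priority, {package_name: fields}),
        # where priority 0 is the top overlay.
        merged = {}   # package_name -> (priority, fields)
        for priority, packages in sorted(overlays, key=lambda item: item[0]):
            for name, fields in packages.items():
                already = merged.get(name)
                if already is None or already[0] == priority:
                    merged[name] = (priority, fields)
        return {name: fields for name, (priority, fields) in merged.items()}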
15:59 <minus> you could, but it's probably got no benefit over loading it from the file
16:00 <minus> depending on how the thing you're making works, it might just be cheaper to keep the data in python in memory
16:01 <parazyd> yeah i don't know how expensive the parsing is
16:01 <parazyd> probably not much
16:01 <parazyd> i think the biggest file is around 40M
16:01 <parazyd> all in all, probably the first run is the toughest, which is about 4GB of data
16:02 <parazyd> afterwards, i just do diffs and update what is needed
16:02 JrWebDev joined
16:02 <minus> how often do you run that?
16:02 <parazyd> (that's another thing, that triggers this application)
16:02 <parazyd> whenever a Package file is updated
16:02 <minus> mh
16:02 <parazyd> but it's the first run that's expensive, as i said
16:03 <minus> so you DL the new Package and re-parse it?
16:03 <parazyd> yes
16:03 <parazyd> but only the one(s) that changed
16:03 <parazyd> which i can easily do in shell/python as well, by looking at http headers
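The header check mentioned here could look roughly like this in Python, comparing the remote Last-Modified header against the value seen on the previous run; the URL and state handling are assumptions.

    import urllib.request

    def changed_since(url, last_seen):
        # HEAD request: fetch only the headers, not the whole Packages file.
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req) as resp:
            return resp.headers.get("Last-Modified") != last_seen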
16:04 <minus> so you'd run it as a script every time a Package is updated and produce the overlay of one (or hundreds) of specific configurations?
16:04 <parazyd> yes
16:05 <minus> one or hundreds?
16:05 <JrWebDev> ive never used redis. ive dabbled on couchdb but i have redis installed on my linux box. Can i grab data from an ldap server and place it within redis? I only want communication to ldap when the ldap data changes. is it possible to script this and have redis cache all information? This will be for a php web application
16:05 <parazyd> minus: what are hundreds in your context? files, or what's inside the files?
16:05 <minus> parazyd: overlay configurations
16:06 <minus> though that doesn't really matter much thinking about it
16:06 <parazyd> i got lost... :D
16:06 <minus> well, the configuration which overlays are used
16:07 <minus> and with which priorities
16:07 <parazyd> no that's a hashmap in python
16:07 <parazyd> 3 or 4 overlays
16:07 svm_invictvs joined
16:07 <parazyd> 0 is top, it's smallest, and 3 is lowest, it's biggest
16:08 <parazyd> anyway yes, perhaps it's good to avoid redis in this usecase
16:08 xep joined
16:10 <minus> if you want to avoid reparsing on every run, either load everything into memory and keep it running (pretty much equivalent to redis) or dump the parsed data to a pickle
16:10 <parazyd> yes it should be a daemon
16:10 <minus> but the first thing i'd try is to just parse everything every time and go with that if it takes just a few seconds
16:10 <parazyd> this is why i initially thought of redis
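The pickle variant minus suggests could be as simple as the sketch below, reusing a parse_packages() helper like the one sketched earlier; the cache path is hypothetical.

    import os
    import pickle

    CACHE = "packages.pickle"   # hypothetical cache file

    def load_packages(path):
        # Re-parse only when the Packages file is newer than the cached pickle.
        if os.path.exists(CACHE) and os.path.getmtime(CACHE) >= os.path.getmtime(path):
            with open(CACHE, "rb") as fh:
                return pickle.load(fh)
        data = parse_packages(path)
        with open(CACHE, "wb") as fh:
            pickle.dump(data, fh)
        return data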
16:11 <minus> should it take care of downloading those files too?
16:11 <parazyd> if yes, then it's a daemon
16:11 <parazyd> if no then a cronjob can run a shell script
16:11 <parazyd> then inotify or the shell script can run this
16:12 <parazyd> i think i kill 2 birds with one stone if i do it as a daemon, and download with python
16:12 <minus> daemon sounds slightly better because no moving parts
16:13 <parazyd> yeah, and threads work well :)
16:13 <minus> yeah, threads should do
16:14 <minus> alternative: asyncio
16:14 <parazyd> ack :)
16:19 SkyRocknRoll joined
16:19 <parazyd> minus: thanks for the tips, appreciate it
16:34 tarkus joined
16:34 Dave_R joined
16:35 hahuang65 joined
16:45 Dave_R joined
16:48 edge226 joined
16:50 Mr__Anderson joined
16:56 Dave_R joined
17:02 maxmatteo joined
17:09 Fweeb joined
17:21 jdelStro1her joined
17:21 <jdelStro1her> Heya
17:23 <jdelStro1her> I'm looking at storing the number of requests per day, per ip. Any advice on whether it's preferable to a) store a single hash per day, where the hash keys are the ip addresses and the hash values are the number of hits from that ip; or b) store a different key for every ip address per date?
17:24 <jdelStro1her> ie a single "visits:<date>" hash with lots of values, or multiple "visits:<date>:<ip>" keys with an integer value
17:25 <jdelStro1her> I don't need to cross-reference any of these values, get all of them in one go, or anything like that
17:33 iamchrisf joined
17:36 <jdelStro1her> I'm mostly concerned about memory usage, but it would be nice to know about the performance characteristics of a hash vs lots of keys too
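For reference, option (a) in redis-py terms, with one hash per day and HINCRBY per hit; the connection and key naming are assumptions.

    import datetime
    import redis

    r = redis.Redis()

    def record_hit(ip):
        # HINCRBY visits:<date> <ip> 1  -- one hash per day, one field per IP.
        day = datetime.date.today().isoformat()
        r.hincrby("visits:{}".format(day), ip, 1)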
17:38 winteriscoming joined
17:39 <minus> unless you're already using redis for other things: use a time series DB
17:43 sknebel joined
17:44 <jdelStro1her> I'm already using redis for other stuff, and don't have a time series db handy
17:48 <minus> storing stuff in a hash seems to save a bunch of memory
17:48 <minus> you can just do a quick benchmark anyway
17:50 <Fweeb> So... I've got a simple queuing system set up and, with it, a simple messaging system for my workers to give some additional notifications via pubsub. Only I have this weird problem. Any listeners (in python2) I have on Windows machines seem to eventually stop seeing notifications from the channel they're subscribed to. Wouldn't mind a hint on how to sort that out...
17:51 <Fweeb> I'm suspecting a socket buffer overflow on the listeners, but the messages being sent are so small and so infrequent, I'm not sure how that might happen
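For context, a listener of the kind Fweeb describes typically looks like this with redis-py; the channel name and message handling are assumptions.

    import redis

    r = redis.Redis()
    pubsub = r.pubsub()
    pubsub.subscribe("worker-notifications")   # hypothetical channel name

    for message in pubsub.listen():
        if message["type"] == "message":
            print(message["data"])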
17:52 <minus> jdelStro1her: in the end it depends on how you need to access the data
17:53 <minus> Fweeb: i'd check the connection status in something like process hacker
17:54 <minus> or take a look at the connection with wireshark
17:55 <Fweeb> minus: been monitoring with wireshark. A little tough to find correlations there. Nothing jumps out as being out of the ordinary. But then again, I don't frequently use wireshark
17:55 <minus> well, it should be obvious to see what happens on the connection; like windows replying with ICMP errors
17:56 <Fweeb> Just some resets, AFAICT
18:00 <minus> "just"
18:00 <minus> RST means bad
18:02 <Fweeb> In that case... I have a better idea of where to look now. :)
18:05 <minus> is the connection idle for a longer while at times?
18:08 atomi joined
18:09 svm_invictvs joined
18:11 wlightning-fuel joined
18:12 wlightning-fuel joined
18:21 <Fweeb> minus: It can be
18:23 <minus> is there some kind of NAT or so between client and server?
18:24 <Fweeb> The confusing thing is that linux clients don't have this issue
18:27 hashpuppy joined
18:40 <minus> maybe different default socket timeouts
18:40 rendar joined
18:40 <minus> no idea if sockets have one by default
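One common mitigation for idle connections being dropped (for example by a NAT or firewall between client and server) is enabling TCP keepalives on the client side; redis-py exposes this as a connection option.

    import redis

    # socket_keepalive asks the OS to send TCP keepalive probes on an
    # otherwise idle connection, which helps keep stateful middleboxes
    # from silently dropping it.
    r = redis.Redis(socket_keepalive=True)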
18:40 rendar joined
18:43 efphe joined
19:27 cyborg-one joined
19:35 winem_ joined
19:38 forgotmynick joined
19:53 <Fweeb> turns out... it wasn't related to socket buffers, but to the stdout buffer (the listener was launched via subprocess)
19:53 <minus> i.e. stdout=PIPE and that was never read from, filled up and locked the process?
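What minus is describing is the classic subprocess pitfall: with stdout=PIPE and nobody reading it, the pipe buffer eventually fills and the child blocks. A sketch of the safer pattern; the script name is hypothetical.

    import subprocess

    # Problematic: output is never consumed, so the child eventually stalls
    # once the OS pipe buffer is full.
    # proc = subprocess.Popen(["python", "listener.py"], stdout=subprocess.PIPE)

    # Safer when the output is not needed (Python 3; under Python 2 use
    # open(os.devnull, "wb") instead of subprocess.DEVNULL):
    proc = subprocess.Popen(["python", "listener.py"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)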
20:00 Guest96_ joined
21:17 Azure joined
21:25 efphe_ joined
21:40 soveran joined
21:40 soveran joined
22:27 atomi joined
22:36 bannakaffalatta joined
22:53 atomi joined
23:15 fakenerd_ joined
23:19 maxmatteo joined
23:26 BrianMiller joined
23:38 minimalism joined
23:39 daxelrod joined
23:43 daxelrod1 joined
23:47 daxelrod joined
23:51 raspado joined