<    March 2017    >
Su Mo Tu We Th Fr Sa  
          1  2  3  4  
 5  6  7  8  9 10 11  
12 13 14 15 16 17 18  
19 20 21 22 23 24 25  
26 27 28 29 30 31
00:00 Derperperd joined
00:07 Muchoz joined
00:10 gentunian joined
00:20 ghost1 joined
00:24 ghost1 joined
00:35 Derperperd joined
00:37 Sasazuka joined
00:55 Gwayne joined
01:01 Sasazuka_ joined
01:11 edrocks joined
01:13 Sasazuka joined
01:33 duckduckgo joined
01:36 techwave61 joined
01:37 Sasazuka_ joined
01:40 <duckduckgo> What is the right way to store an entire dataframe (~300kb) into a single Mongodb document? Is it a good idea?
01:40 ahr3n joined
01:44 Sasazuka joined
01:48 blizzow joined
01:51 Sasazuka_ joined
01:51 philipballew joined
01:52 gentunian joined
02:01 <duckduckgo> What is the right way to store an entire dataframe (~300kb) into a single Mongodb document? Is it a good idea?
02:12 blizzow joined
02:44 mdorenka joined
02:48 svm_invictvs joined
03:01 jeffreylevesque joined
03:02 jeffreylevesque left
03:16 my007ms left
03:19 blizzow joined
03:28 cybertoast joined
03:28 <duckduckgo> What is the right way to store an entire dataframe (~300kb) into a single Mongodb document? Is it a good idea?
03:32 ahr3n joined
03:40 ghost1 joined
03:52 Derperperd joined
03:55 BlueProtoman joined
03:56 <BlueProtoman> I'm interested in making a query that only returns one value per object. How can I just get this data back as a plain Array?
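[Editor's note: a hedged sketch of what BlueProtoman is after. In the mongo shell, a projection narrows each document to one field and `map` flattens the cursor into a plain array; collection and field names below are invented for illustration.]

```javascript
// What find() with a projection yields: one small document per match.
// Simulated here with plain objects so the flattening step is visible.
const docs = [{ name: "alpha" }, { name: "beta" }];

// Flatten the documents into a plain array of values.
const names = docs.map(d => d.name);
// names is ["alpha", "beta"]

// Against a live collection the same idea would be:
// db.items.find({}, { name: 1, _id: 0 }).map(d => d.name)
```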
04:06 raspado joined
04:07 fullerja joined
04:14 Platyp joined
04:21 <Platyp> i have a collection with documents containing an array field where the elements of the array are of the form "Day-Hour:Customers", and i need to extract the customers for Fri-21. how would i go about doing this?
04:22 <Platyp> my goal is to get the documents with the lowest nonzero Fri-21 customer count
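[Editor's note: one reading of Platyp's question. If the array elements really are strings like "Fri-21:13", the count can be parsed out client-side once the documents are fetched; the data shape here is guessed from the description, not confirmed in-channel.]

```javascript
// Given one document's array of "Day-Hour:Customers" strings,
// return the customer count for a specific day-hour slot.
function customersAt(slots, dayHour) {
  for (const slot of slots) {
    const sep = slot.lastIndexOf(":");
    const key = slot.slice(0, sep);
    if (key === dayHour) return Number(slot.slice(sep + 1));
  }
  return null; // slot not present in this document
}

// Fetched documents could then be filtered in the client for the
// lowest nonzero Fri-21 count.
const count = customersAt(["Thu-09:4", "Fri-21:13"], "Fri-21");
// count is 13
```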
05:14 edrocks joined
05:20 ayogi joined
05:22 igniting joined
05:35 Guest65382 joined
05:37 sterns joined
05:42 auzty joined
06:01 Karbowiak| joined
06:07 fullerja joined
06:20 tildes joined
06:35 Guest65382 left
06:35 slackorama joined
06:44 severance joined
06:46 lpin joined
06:51 _jd joined
07:17 tildes joined
07:21 castlelore joined
07:38 HermanToothrot joined
07:59 Muchoz joined
08:08 fullerja joined
08:25 xcesariox joined
08:34 tildes joined
08:36 gfidente joined
08:39 ahr3n joined
08:47 AvianFlu joined
08:47 xcesariox joined
08:57 SkyRocknRoll joined
09:01 AvianFlu joined
09:18 edrocks joined
09:26 RxMcDonald joined
09:27 <RxMcDonald> Hello, is there a clever way to avoid more lines of code if the variable is undefined? i.e. $set: { var1: value, var2 is undefined so no, etc } ?
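[Editor's note: RxMcDonald's question, building a `$set` that skips undefined variables, can be handled by filtering the object before it reaches the driver. A minimal sketch; the field names are invented.]

```javascript
// Build a $set payload containing only the fields that are defined,
// so undefined variables never reach the update document.
function buildSet(fields) {
  const set = {};
  for (const [key, value] of Object.entries(fields)) {
    if (value !== undefined) set[key] = value;
  }
  return { $set: set };
}

// Hypothetical usage: var2 is undefined, so it is simply dropped.
const update = buildSet({ var1: 42, var2: undefined, var3: "x" });
// update is { $set: { var1: 42, var3: "x" } }
```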
09:31 jwd joined
09:33 lpin joined
09:36 okapi joined
09:48 nalum joined
09:50 <nalum> hello all. I'm running mongodb on kubernetes and I've started getting timeout errors from the nodes. I'm not sure where to start looking, any pointers?
09:50 rendar joined
10:04 RxMcDonald left
10:04 severance left
10:08 fullerja joined
10:11 cev joined
10:12 ahr3n joined
10:29 <synthmeat> is it possible to instruct mongodb (or more likely the driver) to compress data _in transfer_ (don't care about storage)
10:29 <synthmeat> like, driver: "gimme entire collection but gzip it before you send it". mongod: "sure, here's gzipped json"
10:31 <Derick> synthmeat: we're looking at adding driver to server compression for the next release
10:32 <Derick> but mongod won't send gzipped json... it would still just be bson as part of the wire protocol, but then compressed
10:32 Jimeno joined
10:32 <Jimeno> hey. When I'm trying to insert a document if it doesn't exist yet, how do I generate the _id of the object I want to check if exists or not? Thx
10:33 <Derick> Can you try to phrase that again? I am not quite understanding that
10:33 <Jimeno> I've a python object, let's call it City
10:33 <Jimeno> I want to insert it into the mongo db but I want to check if it already exists before inserting it
10:34 <Jimeno> To just update the fields that have changed in case that the object already existed in the db
10:34 <Derick> which field in City is unique?
10:34 <Jimeno> for example the name
10:35 <Derick> then you need to make the _id field the name of City
10:35 <Derick> you shouldn't need to have to check whether a document already exists, as that causes race conditions
10:35 <Jimeno> hm... how? isn't _id a hash generated by mongo's engine?
10:36 <Derick> no
10:36 <synthmeat> Derick: oh, ok. interesting and useful. thank you! i'll then skip ad-hoc implementation of mine for now. it'll cut, for my use case, bandwidth/latency almost 2 orders of magnitude then, if the compression ratio is close to what gzip does to json
10:37 <Derick> _id is auto generated by the driver for new documents, unless you have set it yourself
10:38 <Jimeno> so, if I've a City object with name and many other attributes, should I set the name as _id and then create a field called name with it?
10:39 <Derick> maybe? I don't know what "should I set the name as _id" means
10:39 <Derick> are you using some sort of ODM?
10:42 <synthmeat> Jimeno: i frequently use replaceOne with upsert to similar effect, if i already know for a fact that those would be fairly infrequent
10:44 <Jimeno> synthmeat: absolute noob with mongo, a link will be absolutely welcome
10:44 <Jimeno> Derick: reading a json file, creating City objects with contents and then inserting them into the DB
10:46 <Derick> if you want to have the City name unique, assign the _id field of the document that you're inserting to the value of the name from the JSON file then
10:48 <Jimeno> Derick: thx, will give it a try
10:49 <synthmeat> Jimeno: https://docs.mongodb.com/v3.2/reference/method/db.collection.replaceOne/
10:50 <Jimeno> synthmeat: thanks!
10:51 <synthmeat> Jimeno: basically, you tell it to find one according to some filter (probably _id in your case, which is city name), new document you will replace it with, and set upsert to true so it creates it if it doesn't find it
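[Editor's note: the replaceOne-with-upsert pattern synthmeat describes, sketched with the city example; the collection name and extra fields are assumptions.]

```javascript
// The document Jimeno wants to insert-or-replace: _id carries the
// unique city name, as Derick suggested.
const city = { _id: "Springfield", name: "Springfield", population: 30720 };

// Match on _id; upsert: true creates the document when no match exists,
// otherwise the whole document is replaced.
const filter = { _id: city._id };
const options = { upsert: true };

// Against a live database:
// db.cities.replaceOne(filter, city, options);
```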
11:07 fullerja joined
11:18 michaeldgagnon joined
11:22 Derperperd joined
11:28 ahr3n joined
11:41 cev joined
11:50 techwave61 joined
11:59 yeitijem joined
12:18 ghost1 joined
12:20 jeffreylevesque joined
12:21 <jeffreylevesque> hey anyone here use puppet to install mongodb?
12:21 StephenLynx joined
12:27 <artok> jeffreylevesque: no but ansible playbooks on their way
12:27 <jeffreylevesque> playbooks?
12:28 <artok> yeah, ansible uses yml "playbooks" to do provisioning
12:28 <jeffreylevesque> ah, yeah i just semi googled it
12:28 <jeffreylevesque> never used ansible before
12:29 <artok> best thing after sliced bread
12:30 <artok> as it has also modules for AWS, DigitalOcean etc
12:34 <jeffreylevesque> ansible has modules for aws?
12:34 <artok> yeah, I've got playbooks to start AWS instances and then provisions them
12:54 okapi joined
12:58 undertuga joined
12:58 cevelans joined
13:00 geoffb joined
13:07 RickDeckard joined
13:10 felixjet joined
13:35 edrocks joined
13:38 Derperperd joined
13:41 bethge joined
13:46 blizzow joined
13:58 jeffreylevesque joined
13:58 <coalado> can I project a query so it returns whether fields exist?
13:59 <coalado> I do not want to return the values of the fields, but a boolean indicating whether each field exists or not
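[Editor's note: coalado's question went unanswered in-channel. One approach on MongoDB 3.4+ is an aggregation `$project` comparing `$type` against "missing", which returns a boolean without exposing the field's value; the collection and field names are invented.]

```javascript
// An aggregation pipeline that projects a boolean for field existence
// instead of the field's value (MongoDB 3.4+ $type aggregation operator,
// which yields "missing" when the field is absent).
const pipeline = [
  {
    $project: {
      hasEmail: { $ne: [{ $type: "$email" }, "missing"] }
    }
  }
];

// db.users.aggregate(pipeline) would yield { _id: ..., hasEmail: true|false }
```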
14:04 jeffreylevesque_ joined
14:05 gentunian joined
14:11 michaeldgagnon joined
14:14 jeffreylevesque joined
14:16 ramortegui joined
14:17 harry joined
14:24 jeffreylevesque_ joined
14:33 akagetsu01 joined
14:33 Derperperd joined
14:34 cybertoast joined
14:34 jeffreylevesque joined
14:37 jr3 joined
14:44 jeffreylevesque_ joined
14:51 fullerja joined
14:55 jeffreylevesque joined
14:58 mbwe joined
15:01 nanohest joined
15:05 jeffreylevesque_ joined
15:15 jeffreylevesque joined
15:19 goldfish joined
15:21 Waheedi joined
15:23 edrocks joined
15:25 jeffreylevesque_ joined
15:29 kexmex joined
15:36 synchroack joined
15:42 jeffreylevesque_ joined
15:52 jeffreylevesque joined
15:53 Derperperd joined
15:57 jeffreylevesque_ joined
15:58 Bizkit joined
15:58 Derperperd joined
16:01 philipballew joined
16:02 Waheedi joined
16:08 jeffreylevesque joined
16:18 jeffreylevesque_ joined
16:22 ghost1 joined
16:23 Danielv123 joined
16:24 edrocks joined
16:26 igniting joined
16:28 Bizkit joined
16:28 jeffreylevesque joined
16:30 <Danielv123> I have a database with items. They are {ID:"string", amount:36083} + some other data I am interested in reading. I want to get all entries with a specific ID whose amount is in a selected range, but I have been unable to get the selector to work. Can anyone help me? http://pastebin.com/bR3pn7zr
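[Editor's note: the selector Danielv123 describes, a specific ID plus an amount in a range, usually combines an equality match with `$gte`/`$lte`. The ID and range values below are invented, since the pasted query isn't in the log.]

```javascript
// Match a specific ID where amount falls inside a chosen range.
const query = {
  ID: "signal-red",                     // hypothetical example ID
  amount: { $gte: 10000, $lte: 50000 }  // inclusive range on amount
};

// Against a live collection:
// db.items.find(query)
```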
16:32 kexmex joined
16:34 Mmike joined
16:34 <Mmike> Hi, lads. How would one check if replicaset communication is taking place over ssl?
16:36 <Bizkit> hey i am currently trying to find the best way to store json data in a mongodb document via php, but can't find any resources on what the best format would be. i don't need to iterate or search the json data while it's in mongodb. i tried to convert it to bson with MongoDB\BSON\fromJSON but that throws an exception (i checked my json data and it's valid), so should i store it as a json string or key value pairs, or is there a way
16:36 <Bizkit> to make the conversion to bson work?
16:37 synchroack joined
16:38 jeffreylevesque_ joined
16:43 Derperperd joined
16:45 Pengi_ joined
16:46 Pengi_ left
16:46 Pengi__ joined
16:47 lpin joined
16:47 Jeroen_ joined
16:48 jeffreylevesque joined
16:49 Jeroen_ left
16:49 Jeroen_ joined
16:49 jrmg joined
16:50 Jeroen__ joined
16:50 Jeroen_ left
16:51 Jeroen_ joined
16:53 Jeroen__ joined
16:55 <Jeroen__> HI there!
16:55 <Jeroen__> I have a question about protecting my MongoDB installation.
16:56 <Jeroen__> I will be running multiple Raspberry Pi units that will send and receive data to my database.
16:56 <Jeroen__> My mongodb installation is running on a linux Ubuntu 16 installation.
16:57 <Jeroen__> However, to make sure the incoming connections are from computers I actually trust, I can use IP filtering/whitelisting to kick out any connections from an unknown IP address.
16:57 <Jeroen__> However, because the building I’m making this for uses dynamic IP addressing (DHCP), I can’t rely on IP addresses. Therefore, I would like to filter by MAC address.
16:58 <Jeroen__> Does anyone know how I can reliably filter on MAC addresses in this manner?
16:58 <artok> or use authentication ?
16:58 <Jeroen__> I’m already using a log-in system to make sure I know what rights everyone has. (Managers have R/U, admins have R/W, RPIs have R/W, etc.)
16:59 <Jeroen__> Right, I’m doing that, but the problem is that the source on the Raspberry Pi will be open, so if anyone can get the source, they also know the log-in details.
16:59 <artok> ok
17:00 <Jeroen__> So I would like to check incoming connections for log-in details to check for permissions, and MAC addresses for connecting at all.
17:01 <artok> and you aren't able to add a dhcp reservation so that every client would always get the same ip?
17:01 cime joined
17:02 <Jeroen_> That is outside my permissions I believe.
17:02 <Jeroen_> I’m making this for my school, together with some other teammates.
17:02 <Jeroen_> However, since this is starting to look like a serious project, I may be able to make that happen.
17:02 <Jeroen_> But MAC addresses would be more reliable. That’s why.
17:03 <artok> ok
17:04 s2013 joined
17:06 <artok> there is still possibility to fake mac addresses
17:11 BlueProtoman joined
17:20 philipballew joined
17:23 Derperperd joined
17:27 Jeroen_ joined
17:30 <Danielv123> Jeroen_, mac addresses can be faked. If you want security, just use a non hardcoded password.
17:31 blizzow joined
17:35 svm_invictvs joined
17:37 Valery joined
17:41 Jeroen_ joined
17:43 raspado joined
17:44 silenced joined
17:45 Jeroen_ joined
17:46 <Jeroen_> @Danielv123 I understand, but I was just wondering. I’m planning to utilize other security measures as well, but I just wanted to know if MAC address whitelisting was possible.
17:54 jrmg joined
17:54 tildes_ joined
17:56 jrmg joined
17:56 Jeroen_ joined
18:06 fullerja joined
18:09 Derperperd joined
18:09 orbyt_ joined
18:15 point joined
18:26 artok joined
18:29 edrocks joined
18:30 dump joined
18:34 castlelore joined
18:37 silenced joined
18:44 gentunian joined
18:46 fullerja joined
18:47 edrocks joined
19:09 jrmg joined
19:13 Sasazuka joined
19:28 Sasazuka joined
19:47 sunoano joined
20:29 nicole_pygirls joined
20:31 <nicole_pygirls> Morning
20:31 <nicole_pygirls> derick
20:31 <nicole_pygirls> im still fighting with that, now using a new scheme
20:32 itaipu joined
20:34 <nicole_pygirls> Who can help me aggregate this?
20:34 <nicole_pygirls> https://paste.ubuntu.com/24147839/
20:35 kexmex joined
20:57 Sasazuka_ joined
20:57 Lujeni joined
21:06 ahr3n joined
21:19 DyanneNova joined
21:25 jrmg joined
21:29 StephenLynx joined
21:33 Jeroen_ joined
21:38 tildes_ joined
21:39 nanohest joined
21:56 s2013 joined
22:07 Derperperd joined
22:09 ahr3n joined
22:10 okapi joined
22:13 beauvolio joined
22:26 jrmg1 joined
22:43 realisation joined
22:50 <nicole_pygirls> hi
22:50 <nicole_pygirls> anyone online?
22:51 <nicole_pygirls> this is a botnet channel?
22:51 <nicole_pygirls> :p
22:52 farchanjo_ joined
22:56 <artok> about
22:57 <nicole_pygirls> about
22:57 <nicole_pygirls> lol
22:57 <nicole_pygirls> do you know mongodb?
22:57 <nicole_pygirls> i need to sort this
22:57 <nicole_pygirls> https://paste.ubuntu.com/24147839/
22:57 <nicole_pygirls> aggregate
22:57 <nicole_pygirls> for each hour
22:58 <nicole_pygirls> $stats.2017.3.9.0.sum for example
23:01 jrmg1 joined
23:12 jeffreylevesque joined
23:13 <jeffreylevesque> is anyone here?
23:27 Sasazuka_ joined