<    March 2017    >
Su Mo Tu We Th Fr Sa  
          1  2  3  4  
 5  6  7  8  9 10 11  
12 13 14 15 16 17 18  
19 20 21 22 23 24 25  
26 27 28 29 30 31
00:38 sz0 joined
00:47 philipballew joined
01:00 point_ joined
01:05 michaeldgagnon joined
01:05 philipballew joined
01:15 gentunian joined
01:44 DyanneNova joined
01:52 hfp_work joined
01:52 itaipu joined
01:53 philipballew joined
02:11 DyanneNova joined
02:32 Necromantic joined
03:26 SkyRocknRoll joined
03:28 ra4king left
03:34 DyanneNova joined
03:42 DyanneNova joined
03:58 armyriad joined
04:13 svm_invictvs joined
04:26 lessthan_jake joined
04:30 ayogi joined
04:51 fullerja joined
05:19 pzp joined
05:26 jaequery joined
05:28 fels joined
05:30 evil_gordita joined
05:40 preludedrew joined
06:22 svm_invictvs joined
06:26 lpin joined
06:39 jkhl joined
06:47 ayogi joined
07:27 igniting joined
07:28 fels joined
07:32 akagetsu01 joined
07:40 HermanToothrot joined
07:40 fels joined
07:55 nanohest joined
08:00 samwierema joined
08:00 yeitijem joined
08:10 jri joined
08:10 jri_ joined
08:11 fels joined
08:11 gfidente joined
08:12 rendar joined
08:12 jri_ joined
08:14 rodmar__ joined
08:14 jri_ joined
08:19 fullerja joined
08:33 Lujeni joined
08:55 Anto|ne joined
08:56 Tantamounter joined
08:57 [SySteM] joined
08:57 [SySteM] joined
08:57 <[SySteM]> Hello
08:58 <[SySteM]> Please, I'm looking for some help with aggregate usage
08:59 <[SySteM]> I'm starting an aggregate query on 4.5 million records
08:59 samwierema joined
08:59 <[SySteM]> How can i track its progress? (disk, etc.)
09:39 <KekSi> [SySteM]: check into profiling
09:39 <KekSi> https://docs.mongodb.com/manual/tutorial/manage-the-database-profiler/
09:39 <[SySteM]> Thanks
09:39 <KekSi> if db.currentOp() isn't good enough
09:40 <KekSi> see https://docs.mongodb.com/manual/reference/method/db.currentOp/
09:40 <KekSi> as well
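KekSi's db.currentOp() suggestion can be sketched in pymongo terms. The document shape (`inprog`, `secs_running`) follows the currentOp manual page linked above; the threshold value and the way the command is invoked in the comment are illustrative assumptions, not something from the log:

```python
# Sketch: pick long-running operations out of a currentOp result, e.g. to
# watch [SySteM]'s aggregation over 4.5M records. The sample document below
# is hand-made for illustration.

def long_running_ops(current_op_doc, min_secs=10):
    """Return ops from a currentOp result running longer than min_secs."""
    return [
        op for op in current_op_doc.get("inprog", [])
        if op.get("secs_running", 0) >= min_secs
    ]

# With pymongo against a real server this would be driven by something like:
#   doc = client.admin.command("currentOp")   # requires a recent server
#   slow = long_running_ops(doc, min_secs=30)

sample = {
    "inprog": [
        {"opid": 1, "op": "command", "secs_running": 42},
        {"opid": 2, "op": "query", "secs_running": 1},
    ]
}
print([op["opid"] for op in long_running_ops(sample, 30)])  # [1]
```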
09:41 ayogi joined
09:47 JStoker joined
09:48 SkyRocknRoll joined
10:03 fullerja joined
10:11 thapakazi joined
10:11 synchroack joined
10:11 Mantis_ joined
10:13 undertuga joined
10:15 Mantis_ joined
10:16 <thapakazi> hey there, is there a way i could update the db.hostInfo().system.hostname value? I have changed the system hostname but it's not being reflected, or db.hostInfo() is giving me the stale info. I guess the 2nd is true
10:16 <thapakazi> Also do i need to reboot the mongo server, to get the right value ?
10:34 kexmex joined
11:12 <compeman> hi all
11:12 <compeman> is there anyone who can help me on aggregate method?
11:14 <compeman> i have a collection called 'contracts', and each contract document has a json object (containing companyName and taxNumber). i want to get contracts with the same taxNumber. thank you all
11:15 <compeman> _id: blabla, contractSerialNumber: blabla, company: { name: 'myCompany', taxNumber: 12345}
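compeman's question (which went unanswered in the log) maps naturally onto the aggregation framework: group by the embedded tax number and keep only groups with more than one contract. Field names follow compeman's sample document; the pipeline would be run with something like `db.contracts.aggregate(pipeline)`:

```python
# Sketch: find contracts sharing a taxNumber, grouped on the embedded
# company.taxNumber field from the sample document above.

pipeline = [
    {"$group": {
        "_id": "$company.taxNumber",                      # group key
        "count": {"$sum": 1},                             # contracts per tax number
        "contracts": {"$push": "$contractSerialNumber"},  # collect their serials
    }},
    {"$match": {"count": {"$gt": 1}}},                    # duplicates only
]
```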
11:19 itaipu joined
11:29 jri joined
11:47 dunk joined
11:48 <dunk> I've got a mongo install on a virtualbox vm and have run out of space
11:48 <dunk> It's just a development machine so I can happily nuke all the data
11:48 <dunk> However, because there is no space left on the disk I can't start the mongo interpreter
11:49 <dunk> How can I safely nuke all the data?
11:50 <Doow> dunk, if the server is running maybe you can connect "remotely"?
11:50 <dunk> no, the daemon crashed
11:57 itaipu_ joined
12:03 nanohest joined
12:08 RickDeckard joined
12:12 StephenLynx joined
12:12 Mantis_ joined
12:19 harry1 joined
12:21 <dunk> Is there a clean way to nuke all the database data in Mongo?
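dunk's question also went unanswered: with the daemon crashed and the disk full, the shell is no use anyway, so the usual approach is to remove the contents of the dbPath while mongod is stopped, then restart. A rough sketch, with the caveat that the path below is the Debian/Ubuntu default and is an assumption — check `storage.dbPath` in your mongod.conf first, and only do this on a disposable dev machine:

```python
# DESTRUCTIVE sketch for dunk's situation: wipe everything under the dbPath
# while mongod is stopped. The path is an assumed default; verify it against
# your own mongod.conf before running anything like this.

import os
import shutil

DBPATH = "/var/lib/mongodb"  # assumption: Debian/Ubuntu package default

def wipe_dbpath(dbpath):
    """Delete every file and directory under dbpath, keeping dbpath itself."""
    for name in os.listdir(dbpath):
        full = os.path.join(dbpath, name)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.remove(full)

# wipe_dbpath(DBPATH)   # only after double-checking the path!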
12:30 jri joined
12:34 j0hnsm1th joined
12:35 jri joined
12:40 gentunian joined
12:41 samwierema joined
12:47 <dunk> :-(
12:49 dunk left
12:54 myu joined
12:55 itaipu joined
12:57 michaeldgagnon joined
13:00 geoffb joined
13:00 nanohest joined
13:01 HermanToothrot joined
13:03 ramortegui joined
13:20 <Tantamounter> What is the best way to configure Mongo for development so it doesn't consume all the memory and die over and over?
13:24 jr3 joined
13:36 re1 joined
13:48 vikneshwar1 joined
13:50 vikneshwar1 left
13:52 itaipu joined
13:57 fullerja joined
14:13 kexmex joined
14:23 Derperperd joined
14:30 lessthan_jake joined
14:32 DyanneNova joined
14:32 gentunian joined
14:39 orbyt_ joined
14:43 jri joined
14:43 philipballew joined
14:44 freeport joined
14:52 itaipu joined
15:00 samwiere_ joined
15:08 jr3 joined
15:09 gentunian joined
15:09 itaipu joined
15:10 jri joined
15:17 lessthan_jake joined
15:21 ssarah joined
15:22 beauvolio joined
15:28 ssarah joined
15:31 gitgud joined
15:33 _jd joined
15:38 jr3_ joined
15:39 Letze joined
15:40 jeffreylevesque joined
15:41 Paleo joined
15:47 _ramo joined
15:48 _ramok joined
15:48 <_ramok> hi
15:48 <_ramok> i've used mongoexport on a database that had 0.079 GB of data; after re-importing it to another machine i see the dbs are only 0.059 GB. what happened to the delta?
15:49 <_ramok> using mongo 3.2.12
15:49 <_ramok> okay, on the source machine where i did the export i'm running: MongoDB shell version: 3.2.11
15:51 <_ramok> sorry, i was talking bullshit: exporting happened with: mongodump -d quickstart -o /root/ and importing happened with: mongorestore -d quickstart /root/quickstart/quickstart/
15:52 <gitgud> lol
15:52 <gitgud> "talking bullshit"
15:52 <gitgud> ahahha
15:52 <ams__> _ramok: I've seen this happen, my assumption is that there was some unnecessary stuff in the source that doesn't get re-imported
15:52 <gitgud> i think indexes get left behind but im not sure
15:52 <ams__> They shouldn't have done
15:53 <ams__> But you should be able to compare?
15:54 <_ramok> ams__: compare what exactly?
15:54 <_ramok> i'm unsure how and what to compare
15:54 <ams__> Oh sorry, I thought you were saying indexes were missing
15:55 <gitgud> ams__, he wasn't saying that. i was saying it
15:55 <ams__> Yes
15:55 <gitgud> i think indexes get left behind when u do a mongodumb
15:55 <gitgud> mongodump*
15:55 <gitgud> one second
15:55 <ams__> So last time I saw this (gigabytes difference) I compared collection counts and they matched up. This isn't everything obviously, but was enough for me.
15:56 <gitgud> http://stackoverflow.com/questions/36854566/why-mongodump-does-not-backup-indexes
15:56 <_ramok> gitgud: is there a way to also export indices and re-import them?
15:57 <gitgud> indexes *do* get left behind
15:57 <gitgud> _ramok, im not sure. in my case my program knows how to build the indexes at startup, so if i just restart my app it can remake the indexes as needed
15:58 <_ramok> okay
15:58 <_ramok> i'll try to simply copy the files
15:58 <_ramok> probably that won't work
15:59 <gitgud> i think for long term you should keep a file that indicates how to build all those indexes. then use a program to run those commands one by one easily
15:59 <gitgud> mongodump is supposed to be a backup tool. its not built to be a redundancy tool
16:00 <gitgud> meaning you shouldn't be using mongodump to deploy clusters anyway. so thats probably why they designed mongodump to not include indexes
16:00 <gitgud> if you want to make clusters and copy over indexes too, look at replica sets and how those work
16:01 <gitgud> just my 2 cents
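gitgud's "keep a file that indicates how to build all those indexes" idea can be sketched as a single index spec replayed at startup. (For the record, mongodump does store index definitions in each collection's metadata file and mongorestore rebuilds them by default unless `--noIndexRestore` is passed, so the size delta is more likely just the restored collections being compact — but an app-owned spec like this is still a reasonable belt-and-braces pattern.) The database, collection, and field names here are invented for illustration:

```python
# Sketch of app-managed index definitions, replayed at startup.
# create_index is a no-op when the index already exists, so this is
# safe to run on every boot.

INDEX_SPECS = {
    "quickstart": [                       # collection name (illustrative)
        ([("email", 1)], {"unique": True}),
        ([("createdAt", -1)], {}),
    ],
}

def ensure_indexes(db, specs=INDEX_SPECS):
    """Create each declared index on its collection."""
    for coll_name, indexes in specs.items():
        for keys, options in indexes:
            db[coll_name].create_index(keys, **options)

# With pymongo: ensure_indexes(client["quickstart"])
```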
16:05 Guest9835 left
16:06 <jeffreylevesque> i have 3 replica sets of 3 vm's each
16:07 <jeffreylevesque> i was able to `mongo --host xxx.xxx.xxx.xxx --port 27017` into another machine
16:07 <jeffreylevesque> but, once i connected, it was no longer accessible from another machine via `mongo --host`
16:07 <jeffreylevesque> is that normal?
16:09 myu joined
16:12 raspado joined
16:15 coudenysj joined
16:15 michaeldgagnon joined
16:20 itaipu_ joined
16:25 Sircle joined
16:25 <Sircle> Scenario: I want to get data for dashboard charts that needs aggregations/counts etc. of a collection in mongodb. Either the same collection has to be queried several times to get results (as it's a json document, not a sql table/join thing) OR get the collection data once from mongo and iterate once in node.js with conditions (i.e. do the computation in node.js). Which one is preferred?
16:26 svm_invictvs joined
16:27 gentunian joined
16:32 itaipu joined
16:34 soosfarm joined
16:34 myu joined
16:41 lessthan_jake joined
16:46 coudenysj1 joined
16:50 Derperperd joined
16:50 itaipu_ joined
17:00 lessthan_jake joined
17:11 ironpig joined
17:11 synchroack_ joined
17:15 dump joined
17:18 point joined
17:20 ayogi joined
17:24 Mantis_ joined
17:24 synchroack joined
17:28 synchroack joined
17:29 igniting joined
17:32 synchroack joined
17:44 castlelore joined
17:44 castlelore joined
17:44 itaipu joined
17:44 Necromantic joined
17:50 itaipu_ joined
18:04 DyanneNova joined
18:12 Sasazuka joined
18:13 InfoTest joined
18:14 Necro|senseless joined
18:23 jimeno joined
18:24 <jimeno> Hey! What's the best way to partially update an object in mongo? I mean, if the object already exists, just update the fields that changed. I've been searching for quite a while and I didn't find anything clear
18:24 <gitgud> jimeno, partially update ? what?
18:24 <gitgud> it depends on what type of update it is
18:25 <gitgud> theres $inc for increments, $set for set and $push, $pull for pushing and pulling to embedded arrays :P
18:25 <gitgud> depending on that you choose your use case
18:25 <gitgud> i think $inc $push $pull works best to avoid race conditions. to use these most if you can
18:25 <gitgud> use $set if absolutely necessary
18:28 <jimeno> gitgud: I do not want to replace the whole object, I just want to update the fields whose contents changed. Is there a mongo-automated way by design?
18:28 <gitgud> jimeno, yeah dude. its $set in the update feature
18:28 <jimeno> I've seen some snippets, but they replace the whole object (or document entry)
18:28 <gitgud> https://docs.mongodb.com/manual/reference/operator/update/set/
18:29 <gitgud> i use $set, it doesnt replace the whole doc. just the parts that i want to change :P
18:29 <gitgud> look at the examples
18:29 <jimeno> gitgud: that's what I've been looking for. Will give it a try. Thank you!
18:29 <gitgud> np :P)
18:29 <gitgud> :)*
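The partial update gitgud describes boils down to a `$set` document naming only the fields to touch — everything else in the matched document is left alone. The filter value, collection name, and field names below are illustrative; with pymongo the call would be something like `db.things.update_one(filter_doc, update_doc)`:

```python
# Sketch of the $set partial update from the exchange above: only the
# named fields change, the rest of the document is untouched.

filter_doc = {"_id": 42}                 # illustrative filter
update_doc = {"$set": {
    "status": "active",                  # top-level field
    "company.name": "newName",           # dotted path reaches into an embedded doc
}}

# db.things.update_one(filter_doc, update_doc)
```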
18:30 Necromantic joined
18:33 jr3 joined
18:37 lessthan_jake joined
18:47 <jimeno> gitgud: worked like a charm! kudos to you!
18:48 <gitgud> :)
18:48 dfdf joined
19:01 kexmex joined
19:06 <Sircle> Want to get data for dashboard charts that needs aggregations/counts etc. of a collection in mongodb. Either the same collection has to be queried several times to get
19:06 <Sircle> results (as it's a json document, not a sql table/join thing) OR get the collection data once from mongo and iterate once in node.js with conditions (i.e. do the computation in
19:06 <Sircle> node.js). Which one is preferred
19:07 <gitgud> computation in mongodb is preferred
19:07 <gitgud> performance wise
19:07 <gitgud> even better would be if you do this computation and save it in mongo somewhere. ya know, pre-aggregate
19:07 <gitgud> that helps most
19:07 <gitgud> as new data comes in, you just update the data
19:07 <gitgud> when users want to pull it, you just give that result to them. rather than redo computation each time in mongo
19:08 <gitgud> thats sorta the formula, see if it fits your use case :)
19:09 <gitgud> Sircle, ^
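gitgud's pre-aggregation pattern ("as new data comes in, you just update the data") can be sketched as a running summary document bumped with `$inc` on each write, so a dashboard read is a single `find_one`. Collection and field names (`summaries`, `events_total`, `counts.login`) are invented for illustration:

```python
# Sketch of pre-aggregation: maintain a summary doc incrementally instead
# of recomputing the aggregation on every dashboard load.

summary_filter = {"_id": "dashboard"}                    # fixed summary doc id
on_new_event = {"$inc": {
    "events_total": 1,                                   # overall counter
    "counts.login": 1,                                   # per-type counter (embedded)
}}

# On each incoming event:
#   db.summaries.update_one(summary_filter, on_new_event, upsert=True)
# On each dashboard read:
#   db.summaries.find_one(summary_filter)    # one cheap fetch
```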
19:17 <jimeno> any pymongo guide on how to do pre-made queries with some user-defined values?
19:18 <gitgud> idk about that. why is mongo shell not appropriate?
19:18 <gitgud> you can write little python scripts in a text file using pymongo and then run them and test on your test database
19:18 <gitgud> shouldn't be a big deal
19:19 <gitgud> i did the same with node at times
19:19 <gitgud> :)
19:19 Necro|senseless joined
19:20 <jimeno> gitgud: or just mongo commands
19:20 <jimeno> the one I'm thinking about is to build the query with string.format and then execute it, but... no-sql injection will be present :(
19:23 <gitgud> jimeno, my idea is anything you can do in your actual production app. you should be able to do in little scripts provided you import the same libs. if you want to test/practice your mongo things in python. then you can write small script in a file and then run those to get a better feel of how the api works with mongo
19:24 <gitgud> now exactly how you would do that i dunno. but its just a general idea. i dont know any python. but i work with node and it works similarly across languages
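On jimeno's injection worry: pymongo queries are plain Python dicts, so user-supplied *values* slot in as data without any string building — there is nothing to escape, unlike concatenated SQL. The risk only appears if user input is allowed to supply operator *keys* (or reach a `$where` string). Field and function names here are invented for illustration:

```python
# Sketch: parameterized pymongo query. The user input is a value inside
# a dict, never parsed as an operator, so string.format is unnecessary.

def contracts_by_tax_number(user_input):
    """Build a find() filter; str() keeps attacker-shaped dicts out too."""
    return {"company.taxNumber": str(user_input)}

query = contracts_by_tax_number("12345; drop everything")
# The 'payload' stays an inert string value:
print(query)  # {'company.taxNumber': '12345; drop everything'}
# db.contracts.find(query)
```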
19:24 <Sircle> gitgud: hm
19:25 <gitgud> Sircle, right so mongo beats node performance wise. so if you can shove your aggregation logic into mongo with aggregation framework that would be way more performant. but if you can pre aggregate your aggregation logic and store it as a variable in mongo its going to be even faster. thats the gist of what i was trying to say
19:26 <gitgud> and nowadays storage is cheap so pre-aggregation is not going to be that costly
19:26 <Sircle> gitgud: I agree but if the use case does not fit, I am worried about 12 fetches in the same collection to aggregate for different fields in a json doc VS 1 fetch from the same collection and 1 loop in the node.js server with 12 if/else conditions.
19:26 <gitgud> Sircle, depends on the size of the loop in node then. is this loop going to be 20 objects? 100 objects? 1000? million?
19:27 <Sircle> gitgud: it can be a million records
19:27 <gitgud> if its lower like 20, no biggie. if you reach a million, and since node is single threaded. then its going to be devastating for the event loop
19:27 <gitgud> yes then thats bad
19:27 <Sircle> git ok, then mongo is the option. the only option I guess?
19:27 <Sircle> definitly not node. Not even in cluster node
19:28 <gitgud> because node handles all events in the same thread. so if a few customers do a call for a iteration thru a million size loop, thats enough of a performance hit that all other new people's http, socket, all calls get blocked while node churns thru that size call. very bad!
19:28 <Sircle> hm
19:28 <Sircle> k
19:29 <Sircle> gitgud: I heard a lot of critism on mongo. Is it true that its discouraged these days?
19:29 <gitgud> Sircle, yes so no to the node. and yes to mongo. use either aggregation framework in mongo which is really good! or pre-aggregate with mongo which is more performant but will take up more of a storage
19:29 <gitgud> Sircle, well it depends on the criticism xD
19:29 <gitgud> there are certain use cases mongo will not be good with. for what it is designed for, mongo will be very very good
19:29 <gitgud> so you have to make sure mongo fits your use case
19:31 <Sircle> 1) need three kinds of data on a 1 million document collection: field_abc = 3, field_abc > 3, field_abc < 3. So three sets of data. I would need 3 fetches/queries/reads 2) http://cryto.net/~joepie91/blog/2015/07/19/why-you-should-never-ever-ever-use-mongodb/ http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
19:31 <gitgud> you seem to think more fetches is bad. its not that bad :P
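For what it's worth, Sircle's three counts don't even need three fetches: a single `$group` pass with conditional `$sum` produces all of them at once, and this works on the 3.2-era servers mentioned earlier in the log. The field name is taken from Sircle's message; everything else is a sketch:

```python
# Sketch: counts for field_abc == 3, > 3, and < 3 in one aggregation pass
# using $cond inside $sum, instead of three separate queries.

one_pass_counts = [
    {"$group": {
        "_id": None,  # one group over the whole collection
        "eq3": {"$sum": {"$cond": [{"$eq": ["$field_abc", 3]}, 1, 0]}},
        "gt3": {"$sum": {"$cond": [{"$gt": ["$field_abc", 3]}, 1, 0]}},
        "lt3": {"$sum": {"$cond": [{"$lt": ["$field_abc", 3]}, 1, 0]}},
    }}
]

# db.collection.aggregate(one_pass_counts)
```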
19:32 <gitgud> i have seen that link get passed around before
19:32 <gitgud> in my experiences i never had that kind of problem
19:32 <gitgud> "loses data" ? lmao
19:32 <gitgud> give me a break
19:32 <gitgud> http://stackoverflow.com/questions/10560834/to-what-extent-are-lost-data-criticisms-still-valid-of-mongodb
19:32 <gitgud> some of those claims have been debunked. look at its answer
19:33 <gitgud> those criticisms may be valid for old mongo. but new mongo handles them fine and im not seeing a surge of new apps complaining about mongo being a bad thing
19:33 <Sircle> hm
19:33 <Sircle> gitgud: thanks! you have been a great helpt!
19:33 <Sircle> help*
19:33 <gitgud> people who are coming new to mongo are worried if mongo is faulty. long term mongo users arent having complaints :P. so its safe to say that those problems have been addressed
19:34 <gitgud> now lately there has been an issue with how mongo binds to all interfaces
19:34 <gitgud> and hackers asking for ransom
19:34 <Sircle> hm
19:34 <gitgud> to that all i can say is, go and learn how to either bind mongo to the local interface, or learn iptables :P
19:34 <gitgud> just to warn you beforehand :D
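The bind-to-local-interface mitigation gitgud mentions (the ransom attacks of early 2017 hit mongod instances listening on all interfaces with no auth) is a one-line config change. A minimal fragment for the YAML-format mongod.conf, assuming a single-machine dev setup:

```yaml
# /etc/mongod.conf fragment: listen only on localhost so the server is
# not reachable from the internet. Add specific private addresses to
# bindIp (comma-separated) only if remote clients genuinely need access.
net:
  port: 27017
  bindIp: 127.0.0.1
```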
19:34 <gitgud> yeah no worries. anytime
19:34 <Sircle> gitgud: for what use cases is mongo good and for what is others? (like for json docs)
19:36 <gitgud> mongo is good for json-like data structures. meaning for new startups mongo is good. why? because new startups generally dont have an idea of how they will grow up and scale. they cant say what new data is going to be added to it. a lot of what im about to say might be advanced so i'll try to explain as best i can. for example say you keep track of humans in your app. then one day your company decides to keep track of dogs or cats. new fields are
19:36 <gitgud> added. mongo lets this data be added easily, as in, its going to be flexible because of the nature of how json works
19:37 <Sircle> but couchdb does the same
19:37 <gitgud> if couchdb handles your case better then go for it dude
19:37 <Sircle> I already use mongo. Just comparing.
19:38 <gitgud> btw im not endorsed or paid by mongo in anyway so sorry if i sound like im tooting its horn too much :P
19:38 <gitgud> im just telling u the benefits i get from working at this startup with mongo
19:38 <Sircle> no problem. You are very informative :)
19:38 <gitgud> how it made my life easier
19:38 <Sircle> k
19:38 <gitgud> you can use same collection to keep track of different types of data, but use same indexes to make them more efficient. if you dont like that, you can take out data. you can use partial indexes to keep index size small and use those to do quicker queries
19:39 <gitgud> also you can query inside json embedded docs. and docs that are inside those embedded docs
19:39 <gitgud> stuff like that makes my life easier because thats the kind of app im working on
19:39 <gitgud> now having said that, if your data is purely transactional and 1 level deep or something
19:39 <gitgud> then mongo may not be necessary, you can use sql or something :P
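The partial-index trick gitgud mentions a few lines up (MongoDB 3.2+) indexes only documents matching a filter expression, keeping the index small for queries that always include that filter. Collection and field names below are invented for illustration:

```python
# Sketch: a partial index covering only active documents, per the
# partialFilterExpression option available since MongoDB 3.2.

partial_index_keys = [("company.taxNumber", 1)]
partial_index_options = {
    "partialFilterExpression": {"status": "active"},  # index only these docs
}

# With pymongo:
#   db.contracts.create_index(partial_index_keys, **partial_index_options)
# Queries must include {"status": "active"} for the planner to use it.
```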
19:40 blizzow joined
19:40 <Sircle> hm
19:40 <gitgud> i think the other db like this that compares with mongo is postgresql
19:40 <Sircle> k
19:40 <gitgud> and pgsql actually beats mongo by benchmark numbers
19:41 <gitgud> but sharding on pgsql is apparently hard (or so ive heard)
19:41 <Sircle> yes
19:42 <gitgud> also since i work very much with gps mongo helps me. but again i dont know if theres some obscure db out there that does this better
19:42 <gitgud> like maybe couchdb does this better but im not sure
19:42 <gitgud> you will have to research that bit yourself :)
19:43 <Sircle> problem with rdbms is when you scale, you have to shard. after a limit, you have to break relations and that results in rdbms being nosql like
19:43 <gitgud> haha
19:43 <gitgud> yeah i suppose
19:43 lessthan_jake joined
19:43 <Sircle> if you do not break, you cannot have master / slave more than 16 or so
19:44 <Sircle> it will effect availability
19:44 <gitgud> but go tell that to the sql users right now who have spent their lives mastering sql. they will give you a hard time and call this nosql tech "hipster nonsense"
19:44 <Sircle> hm
19:44 <gitgud> im just saying there is a good and bad side to everything
19:44 <gitgud> my program needs mongo or something like couch or postgresql more than it needs something like mysql
19:45 <Sircle> hm
19:45 <gitgud> im kind of in deep with mongo as well so there has to be a huge flaw in mongo for me to want to switch to something else. but so far things are good
19:46 <Sircle> I see. thanks! will be in touch.! got to go now.
19:46 <gitgud> take care and good luck man
19:46 <Sircle> thanks :)
19:46 <Sircle> you too
19:57 lessthan_jake joined
20:01 synchroack joined
20:02 lessthan_jake joined
20:03 synchroack joined
20:05 synchroack joined
20:19 synchroack joined
20:21 lessthan_jake joined
20:30 philipballew joined
20:41 realisation joined
20:43 rendar joined
20:43 rendar joined
20:48 Derperperd joined
20:53 DyanneNova joined
21:01 beauvolio joined
21:04 jeffreylevesque joined
21:05 blizzow joined
21:06 blizzow joined
21:12 Muchoz joined
21:20 artok joined
21:26 blizzow joined
21:35 lessthan_jake joined
21:46 Huck_Fumble joined
21:47 <Huck_Fumble> hello guys
21:49 <Huck_Fumble> pentesting an old mongodb server and having some issues. there's a box with unsanitized input which would allow me to inject directly into that box, or to escape the code block associated with that where function and inject from scratch
21:50 Muchoz_ joined
21:50 <Huck_Fumble> the vuln function is structured as a {'$where': "inputa. == ''"}
21:52 <Huck_Fumble> wondering if its best to work within the function or if its smarter to escape out of it with '; foobar ; var foo =bar'
21:53 <Huck_Fumble> or if i should work within that provided box without escaping. having trouble with both methods but escaping out seems to provide initial success with failure down the road which makes me wonder if me escaping the code block is the issue
22:20 caley joined
22:21 caley left
22:25 gentunian joined
22:38 caley joined
22:39 <caley> Greetings- does anyone here have experience propping up mongodb on rhel 7.x? I'm sharding a cluster in AWS and am running into SELinux issues left and right, any direction would be appreciated
22:51 synaptech- left
22:53 Muchoz joined
23:01 philipballew joined
23:40 DyanneNova joined