00:11 evulish joined
00:11 hillct joined
00:18 gentunian joined
00:35 xaep joined
00:42 xaep joined
00:42 Necro|senseless joined
00:43 timg__ joined
00:52 Soopaman joined
00:54 Squiggs joined
00:55 Doow joined
00:56 <svm_invictvs> Hey
00:57 Gwayne joined
00:58 <svm_invictvs> So, this may be more appropriately aimed at Docker, but I'm following these instructions on this page: https://hub.docker.com/_/mongo/
00:58 <svm_invictvs> When I run mongo I can't connect to the instance running in the Docker container. The page says it exposes the port, but I'm guessing I need to configure something to actually bridge the port? Or am I missing something?
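(A sketch of the usual fix, for reference: EXPOSE in the image only documents the port, it does not make it reachable from the host — the port has to be published with -p when the container is started. The container name here is a placeholder:

    docker run -d -p 27017:27017 --name some-mongo mongo
    mongo --host 127.0.0.1 --port 27017

Nothing needs to be configured inside MongoDB itself for this; -p sets up the bridge.)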
00:59 __MPZ joined
00:59 dustinm` joined
01:00 supershabam joined
01:01 SpeakerToMeat joined
01:01 _habnabit joined
01:02 jesk joined
01:08 artok joined
01:09 evulish joined
01:21 Soopaman joined
01:32 Wulf4 joined
01:35 harry joined
01:41 SpeakerToMeat joined
01:45 darkfrog joined
01:50 evulish joined
01:52 PerpetualWar joined
01:54 Siegfried joined
02:01 metasansana joined
02:08 artok joined
02:10 PerpetualWar joined
02:10 alexi5_ joined
02:22 evulish joined
02:22 darkfrog joined
02:40 evulish joined
03:00 mdorenka joined
03:22 evulish joined
03:31 Siegfried joined
03:32 raspado joined
03:40 Siegfried joined
03:48 gentunian joined
03:57 timg__ joined
04:12 evulish joined
04:13 artok joined
04:16 kyuwonchoi joined
04:17 kyuwonchoi joined
04:18 kyuwonchoi joined
04:19 kyuwonchoi joined
04:20 kyuwonchoi joined
04:24 coudenysj joined
04:26 guybrush joined
04:28 <guybrush> hey there! i am saving dzi files (deep-zoom-image format) in GridFSBuckets; now i wonder if i should rather create a bucket for every image (which results in hundreds of smaller files) or just store everything in the same bucket. is there some downside to creating a lot of buckets instead of having only a small number of them?
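(For reference, a minimal Node.js sketch of the single-bucket approach; the db name, bucket name, and paths are placeholders. One relevant fact: each bucket is a pair of collections, <bucketName>.files and <bucketName>.chunks, plus their indexes — so hundreds of buckets means hundreds of extra collections and indexes, whereas one shared bucket stays at two, and the image id can live in the filename or metadata:

    var mongodb = require('mongodb');
    var fs = require('fs');

    mongodb.MongoClient.connect('mongodb://localhost:27017/images', function (err, db) {
        if (err) throw err;
        // one shared bucket for all images
        var bucket = new mongodb.GridFSBucket(db, { bucketName: 'dzi' });
        fs.createReadStream('./tile_0_0.jpg')
            .pipe(bucket.openUploadStream('image42/tile_0_0.jpg'))
            .on('finish', function () { db.close(); });
    });
)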
04:41 felixjet joined
04:44 Siegfried joined
04:44 ayogi joined
04:53 fracting joined
04:58 timg__ joined
05:08 senaps joined
05:09 senaps joined
05:16 senaps joined
05:21 kyuwonchoi joined
05:24 armyriad joined
05:28 Siegfried joined
05:41 lpin joined
05:43 Siegfried joined
05:50 rendar joined
05:55 coudenysj joined
05:59 timg__ joined
06:10 SkyRocknRoll joined
06:35 Necromantic joined
06:44 sparsh joined
06:54 jri joined
06:54 culthero joined
06:56 jesopo joined
06:59 timg__ joined
07:01 _jd joined
07:01 akagetsu01 joined
07:03 YoY joined
07:08 Bdragon joined
07:13 jesopo joined
07:23 jri_ joined
07:25 jri joined
07:26 evil_gordita joined
07:54 evulish joined
08:00 preludedrew joined
08:09 gfidente joined
08:19 gfidente joined
08:20 coudenysj joined
08:24 senaps joined
08:30 ams__ joined
08:34 gfidente joined
08:42 jesk left
08:43 jesopo joined
08:55 okapi joined
08:58 yeitijem joined
08:59 intellix joined
09:02 timg__ joined
09:11 timg__ joined
09:27 waits_ joined
09:27 <waits_> hi
09:27 <waits_> How can I limit the results to a % of the total number of results?
09:27 <waits_> So I do some operations in aggregate, order them, and want only a % of them.
09:29 tibyke_ joined
09:30 tibyke_ left
09:30 tibyke joined
09:30 <tibyke> morning
09:31 <tibyke> trying to sort by a full textual representation of a date/time, e.g. Mon Feb 10 2014 18:31:31 GMT+0100 (CET). is there an easy way to do that, or should i just add another field with the unix timestamp and sort by that?
09:34 brk_ joined
09:39 <Derick> tibyke: there is no way to do that - you need the timestamp
09:40 <tibyke> Derick, maybe i can ``project`` a field with some sort of conversion in version 3.4?
09:43 <Derick> no, that's not planned
09:43 <Derick> we're working on a few things like that, but still not with any "random" date/time string
09:43 <Derick> although we could extend that later
09:45 <tibyke> mkay, thanks for the information, i'll make an extra field then.
09:46 <waits_> am I forced to use two queries for this?
09:50 <Derick> waits_: for a % of the result, maybe...
09:50 <Derick> not sure whether you can limit by an expression
09:50 <waits_> I'd need to know the total count
09:50 <waits_> after the aggregation
09:51 <Derick> waits_: I don't think you need two queries, just consume as many results in your app as you need, and then close the cursor/iterator
09:52 <waits_> oh right
09:53 <Derick> you might want to set the batch size to something lower than normal for that though
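(A rough mongo shell sketch of that suggestion; the collection, pipeline, and the 10% figure are placeholders, and the separate count assumes the pipeline doesn't change how many documents come out:

    var total = db.orders.count({ status: "done" });   // how many the pipeline will emit
    var wanted = Math.ceil(total * 0.10);              // keep the top 10%
    var cursor = db.orders.aggregate(
        [ { $match: { status: "done" } }, { $sort: { total: -1 } } ],
        { cursor: { batchSize: 100 } }                 // smaller batches, per the advice above
    );
    var kept = [];
    while (cursor.hasNext() && kept.length < wanted) {
        kept.push(cursor.next());
    }
    cursor.close();   // stop the server from producing the rest
)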
09:53 culthero joined
09:54 senaps joined
10:05 <waits_> I will have to look into it, but thanks
10:06 <tibyke> Derick, sorry for being a noob, but what is the function to convert "Mon Feb 10 2014 18:31:31 GMT+0100 (CET)" to a unix timestamp? i just can't find it.
10:10 senaps joined
10:10 <tibyke> getTime() :)
10:10 <Derick> in which language?
10:13 <tibyke> db.foobar.find().limit(3).forEach(function(doc) { doc.createdAtInt = new Date(doc.createdAt).getTime() / 1000; db.foobar.save(doc); });
10:13 <tibyke> this was it
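(The same one-liner, spread out with comments for readability — the .limit(3) is a dry-run guard; dropping it converts the whole collection:

    db.foobar.find().limit(3).forEach(function (doc) {
        // Date() parses strings like "Mon Feb 10 2014 18:31:31 GMT+0100 (CET)";
        // getTime() returns milliseconds, so divide by 1000 for a unix timestamp
        doc.createdAtInt = new Date(doc.createdAt).getTime() / 1000;
        db.foobar.save(doc);
    });
)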
10:17 <Derick> what sort of data are you storing in createdAt?
10:18 <Derick> just a unix timestamp I hope, and not a Mongo timestamp type?
10:18 <Derick> oh right - Date is a javascript type
10:18 <Derick> tibyke: you know that the _id's object ID also contains a timestamp, right?
10:19 <tibyke> Derick: yes, sure, but that's metadata about the _id itself; it doesn't necessarily reflect when the row/object was actually created.
10:22 <Derick> OK
10:30 coudenysj joined
10:31 gfidente joined
10:33 coudenysj joined
10:33 gfidente joined
10:46 coudenysj1 joined
10:48 techwave61 joined
10:48 lowbro joined
10:54 <tibyke> isn't there really a better way of paginating (skip + limit) and getting the number of found items than running one query without skip/limit to get the total and then another query to get the items?
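(That is the standard pattern — one count, one page fetch. A minimal shell sketch with placeholder names:

    var page = 3, perPage = 20;
    var query = { published: true };
    var total = db.items.count(query);                  // drives the page count
    var docs = db.items.find(query)
                       .sort({ createdAt: -1 })
                       .skip((page - 1) * perPage)
                       .limit(perPage)
                       .toArray();                      // the current page

On 3.4, the aggregation $facet stage can return the count and the page from a single query, at the cost of a heavier pipeline.)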
10:57 timg__ joined
11:14 kyuwonchoi joined
11:16 timg__ joined
11:29 senaps joined
11:35 senaps joined
11:35 gfidente joined
11:36 senaps joined
11:37 blizzow joined
11:59 intellix joined
12:07 synthmeat joined
12:12 intellix joined
12:13 rafael_sisweb joined
12:14 Bdragon joined
12:20 <rafael_sisweb> hey guys.
12:20 <rafael_sisweb> Is there a way to dump a crashed mongod 3.4 wiredtiger database?
12:20 <rafael_sisweb> I have one that does not start up because the "local" directory was accidentally deleted.
12:20 <rafael_sisweb> mongod with --repair verifies and reindexes all collections until it gets to the local database, then crashes.
12:20 <rafael_sisweb> Seems all my relevant data is OK, but how do I get it out?
12:21 <xrated> rafael_sisweb: you _should_ be able to go into the directory where the db files are and nuke the local.* files
12:21 <xrated> obvs back this all up first
12:22 <* xrated> would rsync the files to a temp box and nuke the local.* files there and try to stand the db up as a test
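(The suggestion above, sketched as commands — paths and the host are placeholders, and mongod should be stopped before copying:

    rsync -a /var/lib/mongodb/ tempbox:/tmp/mongo-copy/
    # then, on tempbox:
    rm /tmp/mongo-copy/local.*
    mongod --dbpath /tmp/mongo-copy --repair
    mongod --dbpath /tmp/mongo-copy        # see if it stands up
)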
12:27 dump joined
12:33 <rafael_sisweb> i've got this error:
12:33 <rafael_sisweb> 2017-04-26T09:26:41.120-0300 I STORAGE [initandlisten] Repairing collection local.me
12:33 <rafael_sisweb> 2017-04-26T09:26:41.120-0300 I STORAGE [initandlisten] Verify failed on uri table:local/collection/0-1570274486529406448. Running a salvage operation.
12:33 <rafael_sisweb> 2017-04-26T09:26:41.156-0300 I - [initandlisten] Invariant failure rs.get() src/mongo/db/catalog/database.cpp 195
12:33 <rafael_sisweb> 2017-04-26T09:26:41.162-0300 I - [initandlisten]
12:33 <rafael_sisweb> ***aborting after invariant() failure
12:34 <rafael_sisweb> before repairing the local database, all other databases were repaired successfully
12:36 <rafael_sisweb> Is there any way to manually recreate the local database?
12:36 <rafael_sisweb> seems that this database did not contain anything relevant.
12:36 <rafael_sisweb> I'm saying that because the repair process was able to read all my databases, collections, index metadata and data.
12:47 <rafael_sisweb> @xrated we had the "--directoryperdb --wiredTigerDirectoryForIndexes" parameters, so all files were inside a directory named after the db. In this case the local directory does not exist in my dbpath
12:48 kyuwonchoi joined
12:48 darkfrog joined
12:51 darkfrog_ joined
12:53 harry joined
12:55 lowbro joined
13:00 ramortegui joined
13:15 jr3 joined
13:20 kyuwonchoi joined
13:35 megamaced joined
13:45 Soopaman joined
13:47 q_q joined
14:00 geoffb joined
14:01 cr0mulent joined
14:03 <cr0mulent> I am trying to connect to a mongodb atlas cluster from Jupyter using pymongo 3.3. I am able to connect successfully using the mongo client, but when I try to connect from Jupyter I get a timeout error: ServerSelectionTimeoutError: '$err'
14:04 <cr0mulent> Is this related to SSL?
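(Quite possibly — Atlas only accepts SSL connections, so a URI without ssl=true is a common cause of that timeout. A pymongo sketch with every name a placeholder:

    from pymongo import MongoClient

    client = MongoClient(
        "mongodb://user:password@cluster0-shard-00-00.mongodb.net:27017,"
        "cluster0-shard-00-01.mongodb.net:27017,"
        "cluster0-shard-00-02.mongodb.net:27017/test"
        "?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin"
    )
    # raises ServerSelectionTimeoutError if SSL or auth is misconfigured
    print(client.admin.command("ismaster"))
)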
14:07 gentunian joined
14:21 kyuwonchoi joined
14:34 joannac joined
14:36 pxed joined
14:38 shackra joined
14:57 sz0 joined
15:06 skot joined
15:15 culthero joined
15:18 YoY joined
15:19 pxed joined
15:32 hillct joined
15:34 jellycode joined
15:42 orbyt_ joined
15:44 C48I52AG joined
15:46 jellycode_ joined
15:58 svm_invictvs joined
16:11 <rafael_sisweb> Is there a way to directly copy files to a new mongodb installation?
16:13 <joshua> Something like this maybe https://docs.mongodb.com/manual/reference/command/copydb/#dbcmd.copydb
16:16 blizzow joined
16:17 <rafael_sisweb> But in this case I'm not able to put my database online.
16:17 <rafael_sisweb> Tried with --repair, but the local database directory was accidentally deleted. Starting mongod with --repair identifies all databases and repairs all collections and indexes, but at the end of the process it starts to repair the local database and crashes.
16:26 itaipu joined
16:29 <joshua> Might depend on how the other instance was set up: which version of mongo, which storage engine, and config options like directoryPerDB. If repair isn't working, some docs here https://docs.mongodb.com/v3.0/administration/backup/
16:29 <joshua> If the versions and config match up usually it just works unless something got corrupt
16:32 svm_invictvs joined
16:36 Siegfried joined
16:37 freddy__ joined
16:44 Siegfried joined
16:54 trevor joined
17:03 <rafael_sisweb> @joshua, I've tried this but had no success starting my database.
17:05 <gfidente> back later
17:21 byah joined
17:21 <byah> sup dudes,
17:23 <byah> i'm trying to keep my server.js file clean by keeping my db read function in a separate file, how would i pass my connected db as a param?
17:23 <byah> to the db read module
17:26 jri joined
17:27 <byah> var dbConnect = MongoClient.connect(url, function(err, db){ if(err) throw err; console.log('db connected') });
17:27 <byah> https://pastebin.com/Xj3YqUYh would i be able to pass that as a param somehow?
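(One common pattern, sketched with placeholder file and collection names: connect once, then hand the db object to the module. Note that with a callback, MongoClient.connect does not return the db, so the var assignment above won't hold it:

    // db-reads.js
    module.exports = function (db) {
        return {
            findUsers: function (query, cb) {
                db.collection('users').find(query).toArray(cb);
            }
        };
    };

    // server.js
    var MongoClient = require('mongodb').MongoClient;
    MongoClient.connect(url, function (err, db) {
        if (err) throw err;
        var reads = require('./db-reads')(db);   // pass the connected db in
        reads.findUsers({ active: true }, function (err, users) {
            // use users here
        });
    });
)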
17:28 silenced joined
17:29 jellycode_ joined
17:29 blizzow joined
17:32 moura joined
17:36 jellycode joined
17:36 gfidente joined
17:36 <jellycode> does anyone here use the C# driver and would be willing to try to help someone?
17:45 freddy__ joined
17:46 jri joined
18:01 jellycode joined
18:10 Mantis_ joined
18:14 re1 joined
18:18 siruf joined
18:19 jellycode_ joined
18:21 Sasazuka joined
18:22 rendar joined
18:25 <jellycode_> I do not understand how the class map lambda works here: http://mongodb.github.io/mongo-csharp-driver/2.4/reference/bson/mapping/
18:26 <jellycode_> It looks simple and I understand the lambdas completely
18:28 <jellycode_> but we're sending one lambda to RegisterClassMap, and inside that we're saying: when called, call the MapMember function with the lambda c.SomeProperty
18:28 <jellycode_> c => c.SomeProperty
18:28 <jellycode_> what is c?
18:30 <jellycode_> c is Class, ok, but what will this accomplish: c => c.SomeProperty
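(What's going on there: c => c.SomeProperty is not code the driver runs — it's an expression tree that MapMember inspects to find out which member you mean; c is just the parameter standing for an instance of the class being mapped. A sketch, with MyClass and the element name as placeholders:

    BsonClassMap.RegisterClassMap<MyClass>(cm =>
    {
        cm.AutoMap();                        // map everything by convention first
        cm.MapMember(c => c.SomeProperty)    // "the member named SomeProperty"
          .SetElementName("sp");             // then customize how it's stored
    });
)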
18:31 okapi joined
18:33 coudenysj joined
18:35 RickDeckard joined
18:53 gentunian joined
19:06 <rafael_sisweb> anyone here has some idea how to recover a database when the "local" directory was deleted manually?
19:06 <rafael_sisweb> repair works fine on all other databases, but when the process tries to repair the local database the file does not exist and the process crashes out.
19:11 okapi joined
19:47 RickDeckard joined
19:51 MuzlL0dr joined
19:52 gfidente joined
19:55 <jellycode_> not i, sorry
19:56 jr3 joined
19:56 edrocks joined
19:56 Soopaman joined
20:04 mbwe joined
20:06 Sasazuka joined
20:09 <jellycode_> Is it actually impossible to create a custom deserializer of a generic type? I can't see how to do it, because in the examples I've found, properties are read one by one, and there's a custom reader for every type... for example: long zebra_stripes = context.Reader.ReadInt64();
20:11 <jellycode_> So, if you want to write a Serializer for Result<T> that can handle Result<Apples> and Result<Banana> and they have different properties, you'd have to build in a switch/case and have custom logic for each one
20:12 <jellycode_> I just wanted to define the default serialization steps for my "Result" class. But it has a member T Value, so I just want the AutoMapper to map as normal. Is that possible?
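(One way out, sketched under the assumption that Result<T> has a single settable Value member: make the serializer itself generic and delegate T to whatever serializer the driver has registered for it, via BsonSerializer.LookupSerializer<T>(), instead of switch/casing on concrete types:

    public class ResultSerializer<T> : SerializerBase<Result<T>>
    {
        private readonly IBsonSerializer<T> _valueSerializer =
            BsonSerializer.LookupSerializer<T>();

        public override void Serialize(BsonSerializationContext context,
                                       BsonSerializationArgs args, Result<T> value)
        {
            var writer = context.Writer;
            writer.WriteStartDocument();
            writer.WriteName("Value");
            _valueSerializer.Serialize(context, value.Value);   // driver handles T
            writer.WriteEndDocument();
        }

        public override Result<T> Deserialize(BsonDeserializationContext context,
                                              BsonDeserializationArgs args)
        {
            var reader = context.Reader;
            reader.ReadStartDocument();
            reader.ReadName();                                  // the "Value" element
            var value = _valueSerializer.Deserialize(context);  // driver handles T
            reader.ReadEndDocument();
            return new Result<T> { Value = value };
        }
    }

Register one instance per closed type, e.g. BsonSerializer.RegisterSerializer(new ResultSerializer<Apples>()).)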
20:19 <rafael_sisweb> is there any way to remove a database's or collection's information from the metadata?
20:19 <rafael_sisweb> _mdb_catalog.wt and WiredTiger.wt
20:19 <rafael_sisweb> If it were possible I could remove the local database from the startup check.
20:21 itaipu joined
20:23 pxed joined
20:25 <joshua> rafael_sisweb: I have no experience recovering wiredtiger stuff but it looks like there is a wt command that might let you do some manipulations http://www.alexbevi.com/blog/2016/02/10/recovering-a-wiredtiger-collection-from-a-corrupt-mongodb-installation/
20:31 pxed joined
20:38 Siegfried joined
20:49 jellycode joined
21:12 castlelore joined
21:15 rayn joined
21:15 <rafael_sisweb> thanks, i'm trying your tip now.
21:21 Siegfried joined
21:22 <rafael_sisweb> if somebody here has more wiredtiger experience i'll appreciate any help.
21:25 StephenLynx joined
21:42 Siegfried joined
21:44 Soopaman joined
21:45 jri joined
21:48 okapi joined
21:54 re1_ joined
21:56 re1_ joined
22:07 edrocks joined
22:08 einseenai joined
22:10 gfidente joined
22:14 ArchDebian joined
23:02 rayn joined
23:05 jellycode joined
23:24 blizzow joined
23:52 bytee joined