00:05 Soopaman joined
00:25 culthero joined
00:27 <puppyMonkey> hello, i am new to mongodb / document stores in general. i am trying to get all channels created by a specific user; in designing the document store, i de-normalized the schema by keeping users and channels separate and referencing the documents by their _id. i am using Mongoose and bluebird to achieve this. This is how i have tried to implement it: https://hastebin.com/eligugiqol.js
00:28 <puppyMonkey> i am getting the user object and also the channel object, i just want the channels created by the user.
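A minimal sketch of the "channels created by one user" query with Mongoose (the model and field names are assumptions; the actual schema is in the hastebin paste and may differ):

    // Assumed schema: Channel documents keep the creating user's _id in `creator`.
    const mongoose = require('mongoose');
    mongoose.Promise = require('bluebird');   // use bluebird for Mongoose promises (4.x)

    const channelSchema = new mongoose.Schema({
      name: String,
      creator: { type: mongoose.Schema.Types.ObjectId, ref: 'User' }
    });
    const Channel = mongoose.model('Channel', channelSchema);

    // Query only the channels created by one user, instead of fetching the
    // user document and all channels separately.
    function channelsByUser(userId) {
      return Channel.find({ creator: userId }).exec();   // resolves to an array of channels
    }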
00:44 philipballew joined
00:51 kyuwonchoi joined
00:52 kyuwonchoi joined
00:53 kyuwonchoi joined
00:53 Gwayne joined
00:54 kyuwonchoi joined
00:55 kyuwonchoi joined
00:59 kyuwonchoi joined
01:04 orbyt_ joined
01:25 re1 joined
01:32 lessthan_jake joined
01:47 puppyMonkey joined
02:05 pzp joined
02:25 orbyt_ joined
02:30 MacWinner joined
04:11 preludedrew joined
04:29 ayogi joined
05:24 philipballew joined
05:43 frenchie joined
05:44 lpin joined
05:51 frenchie left
05:53 frenc joined
05:55 igniting joined
06:03 jwd joined
06:07 <frenc> Is anyone online familiar with using Oplog to maintain real-time(ish) state with another db? Keen for some advice before I dive in too much further
06:13 <KekSi> what do you mean? are you trying to write your own replication?
06:14 <KekSi> or are you trying to ship the oplog and replicate mongo into some other kind of database?
06:15 <KekSi> oplog shipping is not much different from WAL (write-ahead log) shipping known from other databases (postgresql, redis, ...)
06:20 <frenc> 2nd option - capture oplog events to replicate to Redshift
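A rough sketch of that second option with the Node.js driver: tail local.oplog.rs with a tailable, awaitData cursor and hand each entry to a hypothetical shipToRedshift() function (the connection string is an assumption; batching, error handling, and resume logic are omitted):

    const { MongoClient } = require('mongodb');

    async function tailOplog(shipToRedshift) {
      // must connect to a replica set member; the oplog lives in the `local` db
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const oplog = client.db('local').collection('oplog.rs');

      // starts at the beginning of the oplog; add a { ts: { $gt: <last seen> } }
      // filter to resume from a known position
      const cursor = oplog.find({}, { tailable: true, awaitData: true });

      while (await cursor.hasNext()) {
        const entry = await cursor.next();   // { ts, op, ns, o, ... }
        await shipToRedshift(entry);
      }
    }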
06:42 coudenysj joined
07:15 <KekSi> i have a question.. i have a replicaset that has 2 members - a primary (priority: 1) and a secondary (priority: 0, hidden: true)
07:16 <KekSi> the primary should never lose its master status
07:16 <KekSi> instead i see in the log "assertion 13435 not master slaveOk=false ...
07:17 <KekSi> how can that happen?!
07:19 <KekSi> there's no other nodes that are eligible to become primary so why does it step down?
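For reference, a mongo shell sketch of the two-member set described above (hostnames are placeholders). In general a primary steps down when it cannot see a majority of voting members, and with only two voting members the primary alone is not a majority, so losing the secondary is enough to trigger a stepdown:

    cfg = {
      _id: "rs0",
      members: [
        { _id: 0, host: "primary.example:27017",   priority: 1 },
        { _id: 1, host: "secondary.example:27017", priority: 0, hidden: true }
      ]
    };
    rs.initiate(cfg);
    rs.status();   // check stateStr and whether both members are reachable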
07:28 Folkol joined
07:29 Lujeni joined
07:39 rendar joined
07:42 sQVe joined
07:43 ssarah joined
07:55 Folkol joined
07:59 Mmike joined
08:09 <Bodenhaltung> Hmm: "/opt/mongo-php-driver/src/libmongoc/src/mongoc/mongoc-config.h:156:30: Error: operator '!=' has no left operand" and in line 167
08:16 gfidente joined
08:22 sQVe joined
08:28 ssarah joined
08:35 ssarah joined
08:50 ssarah joined
08:56 <KekSi> i'm still hoping someone can tell me why it lost its primary status and how i can avoid it
09:18 ssarah joined
09:21 kexmex joined
09:24 ssarah joined
09:35 jn joined
09:47 castlelore joined
10:13 sQVe joined
10:16 Folkol joined
10:18 castlelore joined
10:20 castlelore joined
10:43 lpin joined
10:45 Champi joined
10:50 arti joined
11:06 Folkol joined
11:18 kexmex joined
11:42 goldfish joined
11:53 StephenLynx joined
12:07 saket joined
12:11 sQVe joined
12:22 waits_ joined
12:22 <waits_> hi
12:33 culthero joined
12:44 DYnamo_ joined
12:56 <KekSi> so, an idea for my problem earlier: if i deploy an arbiter alongside the primary
12:57 <KekSi> that primary should stay primary and never lose that status, even when it loses connection to the secondary (which is set to priority: 0 and hidden: true)
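A hedged sketch of that idea in the mongo shell (the arbiter hostname is a placeholder). With three voting members the primary keeps a majority (2 of 3) even while the hidden, priority-0 secondary is unreachable:

    rs.addArb("arbiter.example:27017");   // run on the current primary
    rs.conf();                            // verify the new member has arbiterOnly: true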
13:00 geoffb joined
13:01 sterns joined
13:02 ramortegui joined
13:03 <waits_> Hi I'm trying to query a list of posts in my object which has this structure: { _id: <>, posts: [{ postId: <>, <other props> }] }. The query is this (but it doesn't work): db.col.aggregate( [ { $match: { "posts.postId": { $gt: "3" } } }, { $unwind: "$posts" }, { $group: { _id: "$_id", posts: { $push: "$posts" } } } ] );
13:04 <waits_> Can anyone tell me why it doesn't work?
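One likely reason, sketched below: the $match before $unwind only selects whole documents (any element matching), and the final $group pushes every element back, so nothing filters the array elements themselves; also $gt: "3" is a string comparison, so compare against 3 if postId is stored as a number. A version that filters elements after unwinding:

    db.col.aggregate([
      { $unwind: "$posts" },
      { $match: { "posts.postId": { $gt: "3" } } },   // now matches individual elements
      { $group: { _id: "$_id", posts: { $push: "$posts" } } }
    ]);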
13:06 Lujeni joined
13:13 yeitijem joined
13:27 itaipu joined
13:36 culthero joined
13:41 armyriad joined
13:50 armyriad joined
14:10 freeport joined
14:15 itaipu joined
14:26 itaipu joined
14:31 klics joined
14:31 lessthan_jake joined
14:40 sushigun joined
14:40 KamiRath joined
14:41 re1 joined
14:52 synchroack joined
14:53 AvianFlu joined
14:54 soosfarm joined
14:58 lounge-user62 joined
15:04 Soopaman joined
15:21 aps joined
15:30 igniting joined
15:36 caliculk joined
15:41 orbyt_ joined
15:42 gregor3005 joined
15:51 shayla joined
15:54 sz0 joined
16:00 caliculk joined
16:02 circ-user-eRTzI joined
16:03 circ-user-eRTzI left
16:03 edrocks joined
16:04 gentunian joined
16:10 itaipu joined
16:13 re1 joined
16:16 SkyRocknRoll joined
16:32 artok joined
16:46 okapi joined
16:48 raspado joined
16:53 svm_invictvs joined
17:06 edrocks joined
17:10 philipballew joined
17:18 point joined
17:29 itaipu joined
17:29 dino82 joined
17:33 point_ joined
17:33 ramortegui joined
17:46 pxed joined
17:49 jeffreylevesque joined
17:50 <jeffreylevesque> do configservers need databases to be created?
17:58 puppyMonkey joined
17:58 timg__ joined
18:01 Sasazuka joined
18:03 blizzow joined
18:19 kexmex joined
18:22 philipballew joined
18:23 sQVe joined
18:33 philipballew joined
18:53 s2013 joined
19:04 Liara- joined
19:12 Sasazuka joined
19:23 kba_ joined
19:24 damnlie_ joined
19:24 edrocks joined
19:28 chasepeeler joined
19:37 ggherdov` joined
19:39 schlitzer joined
19:40 sjums_ joined
19:40 philipballew joined
19:40 Mantis_ joined
19:46 pxed joined
20:04 Letze joined
20:11 rendar joined
20:16 svm_invictvs joined
20:16 okapi joined
20:16 philipballew joined
20:26 Sasazuka_ joined
20:45 pxed joined
20:55 Sasazuka joined
21:07 pythonholum joined
21:07 Sasazuka_ joined
21:08 <pythonholum> Is there a way to group by minute in mongo, or a way to get the unix timestamp from a date?
21:09 <pythonholum> I have found group by $minute, but that is just 0 - 59, so it is useless if I am spanning more than an hour or a day
21:10 <pythonholum> I found a solution but it is ugly (adding and multiplying: year * 10000000, month * 1000000, day of month * 10000, hour * 100, and minute)
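A less ugly sketch, assuming the documents carry a date field called `ts` (the real field name will differ): $dateToString can truncate to minute resolution for grouping, and subtracting the epoch date yields milliseconds since 1970:

    db.col.aggregate([
      { $group: {
          _id: { $dateToString: { format: "%Y-%m-%d %H:%M", date: "$ts" } },
          count: { $sum: 1 },
          unixMillis: { $first: { $subtract: ["$ts", new Date(0)] } }   // ms since epoch
        } }
    ]);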
21:12 wartdev joined
21:15 orbyt_ joined
21:27 sz0 joined
21:29 StephenLynx joined
21:30 philipballew joined
21:52 okapi joined
21:55 jeffreylevesque joined
21:58 Sasazuka joined
22:07 edrocks joined
22:20 frenc left
22:26 Sasazuka_ joined
22:35 pxed joined
22:49 blizzow joined
23:01 GothAlice joined
23:01 GothAlice left
23:01 Sasazuka joined
23:27 re1 joined
23:36 fels joined
23:40 philipballew joined
23:44 yengas joined
23:45 <yengas> Hey guys. I am writing a docker-compose file for my project to run mongodb with an initial database for testing.
23:45 <yengas> however the mongorestore command sometimes fails, though most of the time it does not
23:45 <yengas> i use `docker-compose down -v && docker-compose up` to remove everything and start up a new container
23:46 Sasazuka joined
23:46 <yengas> https://gist.github.com/Yengas/53c4fb6f95ad2ec0d26a69395eab355d
23:47 <yengas> this is the error i get.
23:47 <yengas> the only thing i can think of is that the bulk operation may be processing some documents in a different order on each `docker-compose up`, and that may be why the error is inconsistent
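Without seeing the gist, one common cause of an intermittent restore failure in a fresh container is mongorestore starting before mongod accepts connections. A hedged Node.js sketch that waits for the server and then runs the restore once (the 'mongo' hostname and '/dump' path are assumptions about the compose setup):

    const { MongoClient } = require('mongodb');
    const { execSync } = require('child_process');

    async function restoreWhenReady() {
      for (let attempt = 0; attempt < 30; attempt++) {
        try {
          const client = await MongoClient.connect('mongodb://mongo:27017');
          await client.close();
          break;                                                   // server is up
        } catch (err) {
          await new Promise(resolve => setTimeout(resolve, 1000)); // wait and retry
        }
      }
      // --drop makes the restore idempotent if it ever runs against leftover data
      execSync('mongorestore --host mongo --port 27017 --drop /dump', { stdio: 'inherit' });
    }

    restoreWhenReady();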
23:55 <jeffreylevesque> do mongodb configservers need databases to be created?
23:58 artok joined