May 2017
00:15 bturker joined
01:18 bturker joined
01:33 bturker joined
02:44 cstrahan joined
02:47 glickbot joined
03:20 bturker joined
03:21 kota__ joined
03:42 bturker joined
03:57 mrallen1 joined
03:59 bturker joined
04:06 jcoene joined
06:00 bturker joined
06:59 jmeredith joined
07:15 bturker joined
08:10 Guest44 joined
08:23 kwmiebach___ joined
08:26 billstclair joined
08:35 bturker joined
08:47 schaary joined
09:04 Necromantic joined
09:15 andyt1 joined
10:12 bturker joined
12:29 codenamedmitri joined
12:39 bturker joined
14:07 codenamedmitri joined
14:37 andyt1 joined
14:51 codenamedmitri_ joined
15:13 codenamedmitri joined
15:23 <craque> russelldb: keep up the good work! :)
15:56 bturker_ joined
16:42 codenamedmitri joined
16:57 codenamedmitri_ joined
17:01 al-damiri joined
17:04 codenamedmitri joined
17:27 codenamedmitri joined
17:45 codenamedmitri joined
17:56 djnym joined
17:59 codenamedmitri joined
18:12 bturker joined
18:48 greenyouse joined
19:15 djnym joined
19:16 Necro|senseless joined
19:35 battlepanda joined
19:36 kleptocroc- joined
19:38 <battlepanda> I'm trying to build an application on riak_core. However, I'm struggling to find documentation for what happens exactly when a node fails. I know that partitions will get reassigned to another node. However, how and when does that process take place? Is it on demand? Does it simulate a handoff to the secondary vnode using existing replicas?
19:39 <battlepanda> If anyone could help me out, I'd greatly appreciate it
19:46 codenamedmitri joined
19:50 <russelldb> battlepanda: there's a #riak_core where you might get more play
19:52 <russelldb> but roughly what happens is that the node watcher spots the nodes are unreachable, and then your preflist will contain fallbacks, and sending work to them results in vnodes being started on those nodes
19:52 <russelldb> roughly
20:01 <battlepanda> @russelldb: Thanks for the reply. So aside from the updated preflist, the rest is manual?
20:06 <battlepanda> I guess I'm just confused then how riak kv handles it. If a node goes out, does it wait for a read/write op to a missing vnode before it will recreate the vnode on a different node? And then does it just pull from a replica? Is that part of the repair process?
20:08 <russelldb> battlepanda: the preflist is how kv ends up reading/writing to a vnode. The vnode management tick is how a vnode decides if it is "at home" (based on the ring); handoff is how data gets "home"; read repair is one form of anti-entropy, AAE is another.
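The fallback behaviour described above can be sketched as a toy simulation. Everything here (the ring layout, `NUM_PARTITIONS`, `N_VAL`, the `preflist` function) is an illustrative assumption, not the real riak_core API; the point is only the shape of the mechanism: the key's partitions stay fixed, and when a partition's primary owner is unreachable, the work (and a fallback vnode) goes to the next reachable node around the ring.

```python
from hashlib import sha1

NODES = ["node1", "node2", "node3", "node4"]
NUM_PARTITIONS = 8
N_VAL = 3  # replicas per key

# Assign partitions round-robin to nodes (a stand-in for the ring).
RING = {p: NODES[p % len(NODES)] for p in range(NUM_PARTITIONS)}

def preflist(key, down=()):
    """Return N_VAL (partition, node) pairs for a key.

    Each partition normally maps to its primary owner; if that owner
    is in `down`, we walk the ring to the next reachable node, which
    would host a fallback vnode for that partition. Assumes at least
    one node is reachable.
    """
    start = int(sha1(key.encode()).hexdigest(), 16) % NUM_PARTITIONS
    primaries = [(start + i) % NUM_PARTITIONS for i in range(N_VAL)]
    result = []
    for p in primaries:
        q = p
        while RING[q] in down:          # owner unreachable: find a fallback
            q = (q + 1) % NUM_PARTITIONS
        result.append((p, RING[q]))     # same partition, fallback host
    return result

healthy = preflist("mykey")
degraded = preflist("mykey", down={healthy[0][1]})
```

Note the partitions in the degraded preflist are unchanged; only the hosting nodes differ, matching the description above that sending work to the fallbacks is what causes vnodes to be started on those nodes.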
20:09 <battlepanda> Gotcha. Thanks, that helps
20:13 codenamedmitri joined
20:56 codenamedmitri joined
21:35 Necromantic joined
21:57 greenyouse joined
21:58 <greenyouse> Hey, if I have a bucket with composite keys of userID:<timestamp> what's the fastest way of rolling up on the keys by userID? (or should the data be modeled differently? I'm new to Riak)
21:59 <greenyouse> If it helps the problem being solved is user events over time
22:00 Necromantic joined
22:10 Necromantic joined
22:11 Necromantic joined
22:15 <russelldb> greenyouse: there's a riak time series database
22:15 <russelldb> would that help?
22:18 Necromantic joined
22:23 <greenyouse> russelldb: thanks, I'll see if I can use that
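The `userID:<timestamp>` composite-key idea from the question above can be illustrated with a prefix scan. This is a toy, in-memory sketch (the dict, keys, and `rollup` helper are all made up for illustration): with lexicographically ordered composite keys, one user's events group under a common prefix. Plain Riak KV does not provide ordered key scans, which is why the Riak TS suggestion (or secondary indexes) fits this access pattern better.

```python
# Hypothetical event store keyed as "<userID>:<unix timestamp>".
events = {
    "alice:1494979200": {"action": "login"},
    "alice:1494982800": {"action": "click"},
    "bob:1494979300":   {"action": "login"},
}

def rollup(store, user_id):
    """Collect, in timestamp order, all keys for one user.

    The ':' separator makes '<user_id>:' an unambiguous prefix as long
    as user IDs themselves never contain ':'.
    """
    prefix = user_id + ":"
    return sorted(k for k in store if k.startswith(prefix))

print(rollup(events, "alice"))
# → ['alice:1494979200', 'alice:1494982800']
```

One caveat of this layout: timestamps must be zero-padded or fixed-width for lexicographic order to match chronological order.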
22:46 greenyouse left
22:52 hive-mind joined