March 2017
00:08 permagreen joined
00:09 malaclyps joined
00:10 eacameron joined
00:13 Ayey_ joined
00:15 eacameron joined
00:15 systemfault joined
00:15 louispan joined
00:19 eacamero_ joined
00:27 eacameron joined
00:29 louispan joined
00:34 eacameron joined
00:35 jorris joined
00:38 <tmciver> Hey folks. I'm using stack in a project and wanted to use the datetime lib (https://hackage.haskell.org/package/datetime-0.3.1), so I added it to my cabal file, but when I tried to compile I got an error (datetime must match -any, but the stack configuration has no specified version (latest applicable is 0.3.1)). Removing datetime from the cabal file seemed to fix the problem. So two questions:
00:38 <tmciver> 1. does this mean that datetime comes in the Prelude? and 2. how could I have known that?
00:38 Levex joined
00:40 <MarcelineVQ> it means the resolver specified in your stack.yaml doesn't have datetime in it
00:41 <MarcelineVQ> you'll need to add datetime-0.3.1 to your extra-deps section of your stack.yaml to tell it to pull that version from hackage
00:42 louispan joined
00:43 <MarcelineVQ> stack is about reproducible builds, so it won't take things from hackage without being told to; instead it uses packages from a specific resolver version such as lts-8.5 https://www.stackage.org/lts-8.5 in order to always build with package versions known to work together
00:43 <MarcelineVQ> as an alternate option you can type: stack solver --update-config to have stack put datetime in your stack.yaml for you
00:46 \Mike joined
00:49 louispan joined
00:50 <tmciver> MarcelineVQ: Ah, thanks. That answered another long-standing question I had: how does stack know which versions of deps to use if I do not specify them? The answer is that they are defined by a resolver, correct?
00:51 <MarcelineVQ> correct, or it checks the hackage index, which is why it knew there was a version that would work with your current setup ("latest applicable is 0.3.1")
00:51 <MarcelineVQ> But it won't go and get it without being told to
00:52 <tmciver> So how does one know that a given lib is not part of a given resolver?
00:52 <MarcelineVQ> https://www.stackage.org/lts-8.5 slap your resolver version in there and those are the packages available out of the box
00:53 <MarcelineVQ> or as you've just seen stack will tell you if it's not in the resolver, the error could use some work though
00:54 <tmciver> MarcelineVQ: Cool. Thanks so much!
00:54 faberbrain joined
00:55 <MarcelineVQ> np
00:57 <tmciver> And I see now that I have to add datetime to *both* the cabal file and to stack.yaml. I swear I knew this stuff at one time.
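A minimal sketch of the two edits being discussed, with a made-up project name; datetime goes in the .cabal file's build-depends as usual, and is pinned in stack.yaml's extra-deps because it is not in the lts-8.5 package set:

    -- excerpt from a hypothetical my-project.cabal (library/executable stanza)
    build-depends: base
                 , datetime == 0.3.1

    # excerpt from the matching stack.yaml
    resolver: lts-8.5
    extra-deps:
    - datetime-0.3.1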
00:57 Rodya_ joined
01:00 Levex joined
01:01 Decoy__ joined
01:03 cschneid_ joined
01:04 Ayey_ joined
01:07 Majiir joined
01:12 Levex joined
01:14 louispan joined
01:23 eacameron joined
01:25 eacameron joined
01:26 louispan joined
01:26 Levex joined
01:28 eacameron joined
01:30 uglyfigurine joined
01:34 dni- joined
01:38 louispan joined
01:41 Apocalisp joined
01:44 Youmu joined
01:55 Ayey_ joined
01:55 faberbrain joined
01:56 louispan joined
02:14 eacameron joined
02:15 wei2912 joined
02:17 louispan joined
02:17 zagieKant joined
02:18 uglyfigurine joined
02:21 <zagieKant> anybody home?
02:22 <geekosaur> nobody here but us monads
02:23 <geekosaur> (some of whom may be burritos)
02:23 <zagieKant> lol good to know
02:26 <qu1j0t3> WRONG ANSWER APPARENTLY
02:27 <geekosaur> "their humor is as inscrutable as their language"
02:27 <qu1j0t3> geekosaur: hey monad, are you having an........................... identity crisis?
02:28 <monochrom> I have a crisis of not being too associative. <duck>
02:34 nomotif joined
02:40 louispan joined
02:47 Ayey_ joined
02:52 louispan joined
02:52 Levex joined
02:54 hexagoxel joined
03:07 eacameron joined
03:09 shayan_ joined
03:11 hphuoc25 joined
03:18 hexagoxel joined
03:19 Levex joined
03:19 blissdev joined
03:23 dni- joined
03:27 Rodya__ joined
03:28 obh15 joined
03:29 Rizy joined
03:48 exferenceBot joined
03:52 hexagoxel joined
03:53 hphuoc25 joined
03:56 malaclyps joined
03:57 faberbrain joined
04:00 Rodya_ joined
04:00 cschneid_ joined
04:09 <obh15> Guys, if I have an instance like this: instance Applicative Foo where pure a = Foo a
04:10 <obh15> how does Haskell know that I want it to return with type Foo
04:10 <obh15> (the pure function, I mean)
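The question goes unanswered in the channel; a minimal sketch of the point (the Foo type here is an assumption about what obh15's looks like): pure is polymorphic in the Applicative, and the concrete instance is picked by type inference from how the result is used, not at the definition site.

    -- hedged sketch: a Foo wrapper resembling the one in the question
    data Foo a = Foo a deriving Show

    instance Functor Foo where
      fmap f (Foo a) = Foo (f a)

    instance Applicative Foo where
      pure a = Foo a
      Foo f <*> Foo a = Foo (f a)

    -- the annotation (or any surrounding use) is what selects Foo's pure
    example :: Foo Int
    example = pure 3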
04:10 takle joined
04:15 <systemfault> He stayed a good 3 minutes, perhaps someone should create #haskell-beginners-express
04:17 <qu1j0t3> "Service in 90 seconds or your /part is free."
04:19 aarvar joined
04:20 louispan joined
04:23 takle joined
04:30 takle joined
04:31 hexagoxel joined
04:34 Glooomy joined
04:35 Glooomy joined
04:35 takle joined
04:35 Glooomy joined
04:36 Glooomy joined
04:37 Glooomy joined
04:38 Glooomy joined
04:39 malaclyps joined
04:50 NeverDie joined
04:50 takle joined
04:58 hphuoc25 joined
05:01 hexagoxel joined
05:04 lithie joined
05:06 takle joined
05:10 emmanuel` joined
05:12 dni- joined
05:14 takle joined
05:21 takle joined
05:22 mounty joined
05:23 aarvar joined
05:28 Tene joined
05:28 Tene joined
05:29 rlpowell joined
05:36 Rizy joined
05:36 hexagoxel joined
05:38 aniketd joined
05:43 louispan joined
05:50 Rodya_ joined
05:56 uglyfigurine joined
05:59 faberbrain joined
06:03 eacameron joined
06:09 uglyfigurine joined
06:11 Pupnik joined
06:29 cschneid_ joined
06:33 hphuoc25 joined
06:35 takle joined
06:35 louispan joined
06:42 takle joined
06:49 hexagoxel joined
06:50 takle joined
06:50 Rodya_ joined
06:51 louispan joined
06:53 meandi_2 joined
06:59 zero_byte joined
07:01 dni- joined
07:08 louispan joined
07:10 takle joined
07:14 hdeshev joined
07:18 Ayey_ joined
07:18 kuznero joined
07:18 <kuznero> Hi All!
07:23 hexagoxel joined
07:27 Kuros` joined
07:28 Deide joined
07:28 eacameron joined
07:37 takle joined
07:39 Ayey_ joined
07:42 thc202 joined
07:43 takle joined
07:44 curious_corn joined
07:46 uglyfigurine joined
07:50 makufiru joined
07:50 takle joined
07:52 Ayey_ joined
07:56 faberbrain joined
07:58 Rodya_ joined
07:59 eacameron joined
08:04 kritzcreek_ joined
08:07 takle joined
08:13 mattyw joined
08:16 takle joined
08:16 eacameron joined
08:22 bluepixe1 joined
08:23 takle joined
08:24 yellowj joined
08:25 t0by joined
08:25 t0by joined
08:32 xmonader left
08:33 xmonader joined
08:35 suls joined
08:39 Miroboru joined
08:42 uglyfigurine joined
08:43 nacon joined
08:46 takle joined
08:46 ederign joined
08:50 dni- joined
08:51 takle_ joined
08:56 louispan joined
08:57 takle joined
08:59 Rodya_ joined
09:14 grayjoc joined
09:14 Rizy joined
09:15 takle joined
09:17 mattyw joined
09:18 grdryn joined
09:20 Durz0 joined
09:20 Glooomy joined
09:23 takle joined
09:24 eacameron joined
09:30 takle joined
09:37 gregman_ joined
09:47 takle joined
09:50 cschneid_ joined
09:54 takle joined
09:58 faberbrain joined
10:00 Rodya_ joined
10:00 takle joined
10:00 cur8or joined
10:02 harfangk joined
10:04 Rizy_ joined
10:04 Levex joined
10:05 takle joined
10:12 takle joined
10:20 takle joined
10:26 takle joined
10:29 dni- joined
10:36 zero_byte joined
10:43 hphuoc25 joined
10:44 geekosaur joined
10:52 galderz joined
10:52 merijn joined
11:00 Rodya_ joined
11:04 merijn joined
11:04 netheranthem joined
11:11 hexagoxel joined
11:21 jgertm- joined
11:23 mengu joined
11:30 geekosaur joined
11:34 geekosaur joined
11:54 louispan joined
11:59 jorris joined
12:00 faberbrain joined
12:00 mattyw joined
12:01 ederign joined
12:01 jmg joined
12:01 Rodya_ joined
12:02 mattyw joined
12:11 mengu_ joined
12:16 Apocalisp joined
12:34 mengu joined
12:36 Ferdirand joined
12:50 mengu joined
12:50 mizu_no_oto_work joined
12:54 <Geekingfrog> Is there a way to have multiple Sources (conduit) running in parallel threads and coalesce them into a single Source? I'm currently doing that using a queue, but I'm having trouble with the cleanup: closing the queue upon exception/cancellation
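A rough sketch of the queue-based approach Geekingfrog describes, with the cleanup tied to the producer threads; it assumes the async, stm-chans, and conduit packages, and (for brevity) that each producer is a plain IO action rather than a full Source:

    import Control.Concurrent.Async (mapConcurrently_)
    import Control.Concurrent.STM (atomically)
    import Control.Concurrent.STM.TBMQueue
    import Control.Exception (finally)
    import Control.Monad.IO.Class (liftIO)
    import Data.Conduit (Source, yield)

    -- Run all producers concurrently; whether they finish or throw, close the
    -- queue so the Source below knows no more values are coming.
    runProducers :: TBMQueue a -> [IO a] -> IO ()
    runProducers q producers =
      mapConcurrently_ (\p -> p >>= atomically . writeTBMQueue q) producers
        `finally` atomically (closeTBMQueue q)

    -- A single Source that drains the queue until it is closed and empty.
    queueSource :: TBMQueue a -> Source IO a
    queueSource q = loop
      where
        loop = do
          mx <- liftIO (atomically (readTBMQueue q))
          case mx of
            Nothing -> pure ()        -- queue closed: stop cleanly
            Just x  -> yield x >> loop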
12:57 pie_ joined
12:58 pie_ joined
13:00 eacameron joined
13:00 merijn joined
13:01 faberbrain joined
13:02 Rodya_ joined
13:07 zero_byte joined
13:08 grayjoc joined
13:08 Ferdirand joined
13:25 \Mike joined
13:26 contiver joined
13:26 zero_byte joined
13:26 animated joined
13:43 <nitrix> MarcelineVQ: I think the basis is either a genetic algorithm or a neural network. I don't have a particular resource to recommend as I learned it indirectly from wandering around online, but once you do get an intuition for these two concepts, there's a plethora of papers online that refine them for different purposes (e.g. recurrent / convolutional / long short-term memory / etc. neural
13:43 <nitrix> networks).
13:44 jorris joined
13:45 <nitrix> Then there are people trying to apply these concepts to language processing, character recognition, road signs, etc. Depending on what information is being processed, sometimes it's better to reduce some noise and extract features first. The intuition is relatively easy to develop once you understand the operational semantics, since you'll be able to predict more easily what will or won't work for
13:45 <nitrix> your problem.
13:46 <nitrix> MarcelineVQ: There are new advances every day though. I'm pretty sure everyone does slightly different variations tailored to their problem.
13:46 <nitrix> I wouldn't mind giving a quick briefing of how GAs and NNs work :P
13:47 mengu_ joined
13:48 <nitrix> For my AI bot, I used a long short-term memory neural network. It has the capability to store information for later, which is awesome for predicting time series.
13:52 Gurkenglas_ joined
13:54 pbrant joined
13:56 eacameron joined
13:57 Glooomy joined
14:02 mizu_no_oto_work joined
14:03 Rodya_ joined
14:06 merijn joined
14:10 carlomagno joined
14:13 mengu joined
14:14 Ayey_ joined
14:22 <qu1j0t3> This is a good overview of the field imho https://www.youtube.com/watch?v=IbjF5VjniVE
14:23 Levex joined
14:24 snowcrshd joined
14:30 guampa joined
14:32 Ayey_ joined
14:33 mizu_no_oto_work joined
14:33 hphuoc25 joined
14:35 pie_ joined
14:35 ederign joined
14:36 contiver_ joined
14:42 vmeson joined
14:43 Ferdirand joined
14:59 uglyfigurine joined
15:02 ederign joined
15:03 faberbrain joined
15:03 Rodya_ joined
15:04 cschneid_ joined
15:07 Levex joined
15:16 skeet70 joined
15:20 systemfault joined
15:21 conal joined
15:33 takle joined
15:38 Levex joined
15:38 mimi_vx joined
15:39 expo873 joined
15:48 hphuoc25 joined
15:52 Ayey_ joined
16:00 Ayey_ joined
16:03 systemfault joined
16:09 conal joined
16:11 yellowj joined
16:12 faberbrain joined
16:21 ederign joined
16:25 Glooomy joined
16:29 pie_ joined
16:34 decaf joined
16:36 Ayey_ joined
16:40 conal joined
16:41 grayjoc joined
16:47 faberbrain joined
16:48 faberbrain joined
16:50 cur8or joined
16:50 Levex joined
16:56 Ayey_ joined
16:58 <MarcelineVQ> nitrix, qu1j0t3: have you seen MarI/O ? it's pretty wizard
17:01 wildlander joined
17:02 <qu1j0t3> no
17:05 Rodya_ joined
17:05 mizu_no_oto_work joined
17:08 uglyfigurine joined
17:10 Deide joined
17:14 simendsjo joined
17:14 Ayey_ joined
17:14 Levex joined
17:34 nacon joined
17:38 decaf joined
17:38 throwaway432 joined
17:40 ederign joined
17:42 lithie joined
17:46 Ayey_ joined
17:49 malaclyps joined
17:50 t0by joined
17:53 Ayey_ joined
18:04 Ayey_ joined
18:05 Rodya_ joined
18:06 nadirs joined
18:14 grayjoc joined
18:17 ederign joined
18:18 takle joined
18:24 merijn joined
18:26 takle joined
18:27 <nitrix> MarcelineVQ: Are you here?
18:30 Rodya_ joined
18:37 <nitrix> :(
18:43 Ayey_ joined
18:46 grwn joined
18:52 hphuoc25 joined
18:58 albertus1 joined
19:01 delexi joined
19:02 takle joined
19:05 MotherFlojo joined
19:06 guampa joined
19:07 takle joined
19:09 Rodya_ joined
19:09 grwn joined
19:10 Levex joined
19:12 malaclyps joined
19:15 Ayey_ joined
19:24 Ayey_ joined
19:28 vmeson joined
19:30 haskellnewb joined
19:38 Ayey_ joined
19:40 <haskellnewb> Hey guys, I could use some help with my selector: http://pastebin.com/jUFj0p2K It just selects the element from a list which has a distance smaller than some value eps to its successor. Now I would like to know if I can use this selector on a list of tuples [(x,y)], considering its range.
19:42 <jle`> haskellnewb: you probably can't, but you can write a generalized version of both
19:42 merijn joined
19:43 <jle`> instead of directly using abs(x-y) you can abstract over it as a comparator function
19:43 <haskellnewb> jle`: but do I need to write a second version of it?
19:43 cur8or joined
19:48 <nitrix> haskellnewb: Perhaps ditch your `eps` parameter and replace it with a function that does the `\eps -> abs(x - y) <= eps`
19:48 <nitrix> sorry `\x y eps -> abs(x - y) <= eps`
19:49 mengu joined
19:49 <haskellnewb> or maybe is it possible to use this selector function to select a tuple from a list whose range has a distance to its following tuple's range smaller than some eps
19:49 <jle`> haskellnewb: write a version that takes in the comparing function as an argument
19:49 <nitrix> haskellnewb: This will let you have different comparators, e.g. \(x1, x2) (y1, y2) eps -> abs(x1-y1) + abs(x2-y2) <= eps
19:49 <jle`> haskellnewb: instead of select :: a -> [a] -> a
19:49 <haskellnewb> that would be a good idea, but we are not allowed to change it like that
19:50 <jle`> haskellnewb: try selectWith :: (a -> a -> Bool) -> [a] -> a
19:50 <jle`> and your original select would be select eps = selectWith (\x y -> abs (x - y) < eps)
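A small sketch of the generalisation jle` is describing (names are illustrative, not from haskellnewb's paste): take the closeness test as an argument, and recover the numeric `within` as a special case.

    -- walk the list until an element is "close enough" to its successor
    selectWith :: (a -> a -> Bool) -> [a] -> a
    selectWith close (x:y:rest)
      | close x y = y
      | otherwise = selectWith close (y:rest)
    selectWith _ [x] = x
    selectWith _ _   = error "selectWith: empty list"

    -- the original numeric selector is then just a special case
    within :: (Num a, Ord a) => a -> [a] -> a
    within eps = selectWith (\x y -> abs (x - y) <= eps)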
19:50 <nitrix> haskellnewb: What are the constraints you're working with? You can define your own function selectBy in a where clause and have the implementation of select use that.
19:51 grwn joined
19:52 <nitrix> Not that it'd be very useful though, if you can't leverage the whole generalization it offers.
19:54 <haskellnewb> nitrix: the constraint is that the function should be called like this: "select eps list". We're supposed to use it on lists of numbers, which works fine, but we also have to use it on lists of tuples... and I don't understand how it can be done with only one select function.
19:54 boris_rh joined
19:55 <nitrix> haskellnewb: With the current type of select, it's not general enough to do this, as far as I know.
19:56 <haskellnewb> is it possible to make it more general? because it says we should implement select as generally as possible
19:56 Ayey_ joined
19:56 <haskellnewb> i thought using 'a' as instances of Num and Ord is the most general way
19:56 <haskellnewb> *instance
19:56 <nitrix> The reason being that tuples don't subtract.
19:57 <nitrix> Are you sure this is the complete type signature of that function?
19:57 <haskellnewb> yes yes, i know that.
19:57 <nitrix> Not even an Eq a, or Ord a constraint?
19:57 <nitrix> Ah there you go! Your version is more sane than what jle` suggested.
19:57 <haskellnewb> We are allowed to change the signature, but it should be possible to call it with "select Double List"
19:58 <haskellnewb> so I can not pass a function etc.
19:58 <nitrix> haskellnewb: Can you show me how you'd order tuples?
19:59 Gurkenglas joined
19:59 <nitrix> In Haskell, tuples can be ordered, but you won't be able to get the "distance" between them with that abs (x-y) trick.
19:59 <haskellnewb> the tuples are ranges. so the order depends on the range b-a if the tuple is (a,b)
20:00 <nitrix> In other words, tuples don't have a Num instance.
20:00 <nitrix> haskellnewb: I see.
20:00 <haskellnewb> yes, i also know that, but that's where my understanding ends..
20:00 <nitrix> haskellnewb: So if you have tuples, you're not interested with the previous element?
20:01 <haskellnewb> exactly
20:01 <haskellnewb> i just compare the actual tuple's range with the range of the next one
20:01 <nitrix> e.g. your current solution was pattern matching on x:y:_ ... ah gotcha :P
20:01 <haskellnewb> yes exactly
20:01 <nitrix> jle`: Still here?
20:02 dni- joined
20:02 geppettodivacin joined
20:02 <haskellnewb> i know how to use this selector to get the distance.. but not the tuple itself
20:03 Ayey_ joined
20:03 <nitrix> There are a few ways to proceed, but I'm stuck at one place with the ad-hoc polymorphism.
20:03 <nitrix> I wouldn't want to tell you to create a type class and mislead you.
20:04 <nitrix> Have you guys even seen type classes yet?
20:04 <haskellnewb> i think with a typeclass i could implement the functions for (-) and <= etc.
20:04 <haskellnewb> if i understand that correctly
20:04 <haskellnewb> but we're not told to use them
20:04 <haskellnewb> moreover, the interval type is given
20:04 <nitrix> That's what worries me.
20:04 <haskellnewb> type Interval = (Double, Double)
20:05 <nitrix> haskellnewb: Would you mind updating an lpaste.net with as much information as you can?
20:05 <haskellnewb> i will
20:06 <nitrix> haskellnewb: As it currently stands, your function cannot distinguish between an Interval or an Int for example, yet it promises to work for all types `a` that are `Ord`'erable and `Num`'eric.
20:07 <nitrix> Tuples typically aren't seen as numeric, but we can order them.
20:07 <haskellnewb> yes exactly, i understand that
20:07 <haskellnewb> i am just copying together the stuff for this one exercise
20:07 <nitrix> But it wouldn't be the order you'd expect of \(x,y) -> x - y
20:08 <nitrix> Alright, waiting for more info :)
20:09 nil_ joined
20:12 <haskellnewb> ok, i have added the description and my solution is the last function: http://lpaste.net/6701125472639516672
20:12 <nitrix> The trickiest part is that sometimes you want to look at the relation between the elements of a list to find the ranges, and sometimes the elements themselves are the ranges.
20:12 <nitrix> He couldn't possibly have some trickery in mind; it'll probably end up being type classes.
20:12 <nitrix> Reading.
20:12 Ayey_ joined
20:13 <haskellnewb> yes, i also think there is some trickery.. this shouldn't be a hard exercise. i think i am missing something
20:13 <nil_> Is there such a thing as a "diff combinator library" yet? As in "roll your own difftool"?
20:14 geppettodivacin joined
20:15 <nitrix> haskellnewb: Possibly you've just cornered yourself due to your implementation.
20:15 <haskellnewb> the implementation of the selector 'within'?
20:17 <nitrix> haskellnewb: So, let's start figuratively. You have some initial value (a,b) like (42, 69), and m = (a+b)/2, so m = 55.5.
20:17 zero_byte joined
20:18 <haskellnewb> yes, i have an initial range, i calculate m from it
20:18 <nitrix> haskellnewb: If f(42) and f(55.5) have different signs we proceed with the next interval (42, 55.5), otherwise we proceed with (55.5, 69)
20:18 <nitrix> Basically figuring out if we have to go higher or lower with our future intervals.
20:18 <haskellnewb> yes exactly, we just assume that there will be a zero crossing in the initial interval, for the sake of simplicity
20:19 <haskellnewb> yes exactly.. like a binary search
20:19 <haskellnewb> the goal is to do this until 2 consecutive ranges in the list only differ by at most eps
20:19 <nitrix> And then there's this condition that if the last two interval ranges tested were smaller than eps, we stop the algorithm.
20:19 <haskellnewb> yes exactly
20:19 <nitrix> Yeah if you haven't spent too much time, you might want to scrap what you have x]
20:20 <haskellnewb> no it's no problem to scrap it, but i think i miss something important
20:20 <nitrix> haskellnewb: How about writing the algorithmic steps as comments and then we can attempt to turn those into actual Haskell? If you're used to other languages, that might help you.
20:20 <nitrix> And this way we can tackle one challenge at a time.
20:20 <nitrix> haskellnewb: You shouldn't have to get stuck on some select function that needs to handle two different types of ranges.
20:21 <nitrix> You're definitely conflating something with the problem statement. The intervals and the ranges aren't supposed to be handled by the same function.
20:21 <nitrix> The range stuff is used as a short-circuit for the algorithm. The algorithm is independent.
20:22 <nitrix> As far as I can tell.
20:22 Ayey_ joined
20:22 <nitrix> I'll try to implement it for fun. Don't wait for me :)
20:22 <haskellnewb> yes yes, i also think that this is the case
20:22 <haskellnewb> but it is explicitly stated that we should use the within function from the first exercise
20:22 <haskellnewb> in the first exercise we only used it for numbers
20:23 <haskellnewb> i am very confused...
20:23 <nitrix> What part of that code is provided?
20:23 <nitrix> within, the types, nextinterval and intervalnesting?
20:23 <haskellnewb> nothing, but we know for example that within is called as i said
20:23 <haskellnewb> the types are given
20:23 <haskellnewb> nextinterval is also given, the signature
20:24 <nitrix> haskellnewb: I think you should be giving to within only ranges.
20:25 <nitrix> haskellnewb: In the case where you'd be tempted to pass it a list of intervals, turn the intervals into ranges first.
20:25 <haskellnewb> nitrix: and in the case when i just pass values?
20:26 <haskellnewb> wait.. is ranges some built-in type?
20:26 <nitrix> > map (\(x,y) - abs(x-y)) [(42, 69)]
20:26 <lambdabot> <hint>:1:13: error: parse error on input ‘-’
20:26 <nitrix> > map (\(x,y) -> abs(x-y)) [(42, 69)]
20:26 <lambdabot> [27]
20:27 <haskellnewb> yes, that's what i do in the function 'null'
20:27 <nitrix> And within would not have that abs(x-y) at all.
20:27 <haskellnewb> but then i only get the distance as a result, since within returns the value in the list
20:28 <nitrix> Mhhh...
20:28 <haskellnewb> the thing is, they don't say what null should return
20:28 <nitrix> What are the types and functions provided?
20:28 <haskellnewb> they just said it should use the selector within and combine it with the generator.. but i am pretty sure the return value should be the range
20:28 <nitrix> I'll try to solve it within the constraints too.
20:29 <nitrix> The implementations are missing obviously, right?
20:30 <haskellnewb> ok, from the pastebin, take the 3 type definitions and the nextinterval signature. within should just be used like this: "within value list"; you can choose the signature on your own. intervalnesting :: Map -> InitialInterval -> [Interval] is given
20:31 <haskellnewb> yes, all implementations are missing, but they are informally given in the book
20:31 <haskellnewb> within for example is given in the book, and looks exactly like this translated to haskell
20:31 expo873 joined
20:31 Ayey_ joined
20:32 <nitrix> within goes through a list of ranges and keeps going until it finds one smaller than eps.
20:33 <haskellnewb> the difference between 2 consecutive elements must be smaller than or equal to eps
20:33 <nitrix> I think that's useful, but it shouldn't have anything to do with intervals.
20:33 uglyfigurine joined
20:34 <nitrix> Maybe you're just supposed to adapt the type of within?
20:34 <zaquest> within probably should take a list of intervals and return an interval, it's still `within value list` just a list of tuples
20:35 <haskellnewb> but it doesn't sound like we should write a new version
20:35 <haskellnewb> "combine the generator with the selector from the previous exercise"
20:35 <nitrix> zaquest: I'm thinkin that too.
20:35 <haskellnewb> the exercises are all on the same haskell file
20:35 <nitrix> It must be the inverse of my suggestion earlier.
20:36 <zaquest> so is it provided? i thought all is known about `within` is that it should be used like `within value list`, no?
20:36 freechips joined
20:36 <haskellnewb> it is not given, we had to implement it zaquest
20:36 <nitrix> Ah...
20:36 <nitrix> That's painful.
20:37 <haskellnewb> well, maybe they just really don't want the interval as a result, just the distance of it...
20:37 <nitrix> http://lpaste.net/353613
20:37 <haskellnewb> then everything would be fine
20:37 <nitrix> Can you confirm this is accurate?
20:37 <zaquest> ok, let's try going a little deeper: what was the previous exercise? :)
20:37 <haskellnewb> i will re-check it, one moment
20:38 <haskellnewb> yes that's correct nitrix
20:38 conal joined
20:38 <haskellnewb> zaquest: it was the previous exercise on this sheet. we had to implement different generators and use them with the selector within
20:38 <haskellnewb> the generators e.g.:
20:39 uglyfigurine joined
20:39 <haskellnewb> they were just generating lists of numbers, of Doubles to be more precise
20:39 <haskellnewb> approximating the sqrt of x
20:39 <haskellnewb> or approx. differentiate
20:40 <nitrix> -- This is done until the distance of 2 consecutive intervals are smaller than eps.
20:40 <haskellnewb> everything is pretty easy on this sheet
20:40 <zaquest> one more suggestion is that you calculate the distance between intervals as the distance between their midpoints, in that case you'll at least get the root of the equation f(x)=0 (it appears to me that's the intention)
20:41 <zaquest> approximation of the root
20:42 <haskellnewb> zaquest: yes, i see what you mean. but then i would also just get the midpoint, not the interval
20:42 <haskellnewb> at the moment i pass the distance, and as a return value i get the distance, not the range
20:42 <nitrix> What bothers me is `within`.
20:42 <haskellnewb> me too nitrix
20:42 <nitrix> It gives you the first value smaller than eps.
20:42 <nitrix> But it should be boolean.
20:43 <nitrix> Something, somewhere, should be checking that your interval is `within` some value eps.
20:44 <nitrix> Because this is solely used as a predicate to short-circuit the algorithm, right?
20:44 peterbecich joined
20:44 <nitrix> You don't need the value or the interval in itself, you can easily manipulate (a, b), (a+b) /2, and f(a) and f(m).
20:45 <nitrix> I don't know why we're obsessed by the result value of `within`.
20:45 <zaquest> haskellnewb, it doesn't say anywhere that the result is an interval, does it? it says we're looking for the zero crossing. this is impossible in general with fixed precision; this algorithm is the bisection method, it looks for an approximation.
20:45 <haskellnewb> zaquest: yes, it is just an approximation.
20:45 <haskellnewb> nitrix: well, the implementation of within is informally given in the book, i will post it in a moment.
20:46 fhoffmeyer1 joined
20:46 <haskellnewb> http://lpaste.net/6985828684208799744
20:48 Ayey_ joined
20:48 <nitrix> This would be so much easier if you could just turn the intervals into distances, pass the distances to `within`, assuming it's not doing `abs (x-y)` anymore and you just obtain the smallest value <= eps, and you do a check somewhere with the result to know if you terminate your recursion or not...
20:49 <nitrix> I feel cornered D:
20:49 <haskellnewb> zaquest: do you mean 'null' shouldn't return the interval? that's what i am not sure about. the sheet just says: combine the generator with the selector within so that the procedure terminates when the difference of the consecutive elements is <= eps
20:49 <peterbecich> Hello, I'm attempting to use replicateM to make a ten round Morra game. The game only lasts five rounds, though. http://lpaste.net/353614 Code: https://github.com/peterbecich/haskell-programming-first-principles/blob/master/src/Morra.hs#L46-L56 Thank you!
20:50 <haskellnewb> nitrix: i totally agree
20:51 <Lazersmoke> what are you asking peterbecich?
20:51 NoCreativity joined
20:52 <peterbecich> Why replicateM isn't replicating `morraRound` ten times
20:52 hphuoc25 joined
20:52 <haskellnewb> nitrix: zaquest thank you guys for your time and effort
20:53 <zaquest> haskellnewb, yes. well, it does say that we're looking for the zero crossing. crossing is a single point, usually... and since the selector within doesn't quite make sense for tuples i think a single point is required
20:53 <nitrix> haskellnewb: I'm sorry I couldn't be more helpful. I really feel cornered here.
20:53 <Lazersmoke> your code is pretty funky; why are you using an explicit StateT constructor on line 53?
20:53 <nitrix> Lazersmoke: Line 56 with the >> get serves no purpose.
20:54 <haskellnewb> zaquest: that's an interesting point i didn't consider until now. maybe you are right and it's meant to be used with (a+b)/2 as the list's entries
20:55 <haskellnewb> nitrix: i am pretty sure that i missed something on the exercise if you guys also say that it doesn't make sense like i understood it
20:55 <nitrix> peterbecich: You've entangled yourself in StateT a little too much. Is this your first foray with monad transformers?
20:56 <haskellnewb> zaquest: but it's stated that the difference of two consecutive intervals has to be <= eps
20:56 <nitrix> peterbecich: Ideally, morraRound would be a stateful computation, so yeah you can use the monadic context `do` like you are. Things that require IO are going to be lifted with liftIO, and modifications of the state within the monadic context are going to happen with `modify` or `put`.
20:57 <nitrix> peterbecich: Ideally, that StateT (\_ -> out) nonsense would get removed.
20:57 <Lazersmoke> it would probably be much easier to spot the problem if you wrote your code in a more standard style, using liftIO for IO things instead of liftM2
20:58 <nitrix> peterbecich: If you do that, you'll have a computation that has both IO effects as well as being stateful.
20:58 <zaquest> haskellnewb, yes, but what is the difference between 2 consecutive intervals? you can define it as a distance between their midpoints.
20:58 <nitrix> peterbecich: Then, you use tenMorraRounds to say that it's replicateM 10 morraRound.
20:58 <nitrix> peterbecich: Then, game will run that computation with evalStateT, and the result will be given to `winner`.
20:59 <Lazersmoke> does this even typecheck? out looks like it should not because the state type is Score, but out is IO (String,Score)
20:59 <nitrix> peterbecich: sorry, I meant runStateT.
21:00 <nitrix> peterbecich: Does this help? I can walk you through and explain what's going on.
21:00 <haskellnewb> zaquest: that's a good point. i think that's the only way the exercise makes sense
21:00 <Lazersmoke> peterbecich: use `liftIO :: IO a -> Morra a` to allow building your IO computation in do notation as opposed to inside a bunch of let's
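A minimal sketch of the shape nitrix and Lazersmoke are suggesting here (not peterbecich's actual code; the scoring rule is simplified and the player's guess is read as a whole line): the round lives in StateT Score IO, IO actions are lifted with liftIO, the score is updated with modify, and replicateM_ runs the ten rounds.

    import Control.Monad (replicateM_)
    import Control.Monad.IO.Class (liftIO)
    import Control.Monad.State (StateT, execStateT, modify)
    import System.Random (randomRIO)

    type Score = (Int, Int)              -- (player, computer)
    type Morra = StateT Score IO

    -- One round: read the player's guess as a whole line, roll the computer's
    -- fingers, and bump whichever score the (simplified) rule says.
    morraRound :: Morra ()
    morraRound = do
      guess   <- read <$> liftIO getLine
      fingers <- liftIO (randomRIO (1, 5 :: Int))
      if guess == fingers
        then modify (\(p, c) -> (p + 1, c))
        else modify (\(p, c) -> (p, c + 1))

    tenMorraRounds :: IO Score
    tenMorraRounds = execStateT (replicateM_ 10 morraRound) (0, 0)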
21:00 <peterbecich> Thanks for your responses nitrix and Lazersmoke. I've dabbled with monad transformers before in Scala. Nothing beyond exercises. I'm probably making the same mistakes here as I did in Scala.
21:01 <nitrix> peterbecich: Okay let's start together with the morraRound :D
21:01 <haskellnewb> zaquest: i will implement it like this. but i would appreciate if they would give a more precise exercise next time...
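Pulling the thread together, a hedged sketch of how the pieces could fit (simplified names; the exercise's Map and InitialInterval types are assumed to be the function and the starting Interval), with the selector applied to midpoints as zaquest suggests. `within` is the numeric selector sketched earlier.

    type Interval = (Double, Double)

    -- generator: bisect, keeping the half on which f still changes sign
    -- (the exercise assumes a zero crossing inside the initial interval)
    nextInterval :: (Double -> Double) -> Interval -> Interval
    nextInterval f (a, b)
      | signum (f a) /= signum (f m) = (a, m)
      | otherwise                    = (m, b)
      where m = (a + b) / 2

    -- an infinite list of nested intervals from the initial one
    intervalNesting :: (Double -> Double) -> Interval -> [Interval]
    intervalNesting f = iterate (nextInterval f)

    -- combine generator and selector: stop when consecutive midpoints
    -- are within eps of each other, returning the approximate root
    findRoot :: Double -> (Double -> Double) -> Interval -> Double
    findRoot eps f i0 =
      within eps (map (\(a, b) -> (a + b) / 2) (intervalNesting f i0))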
21:01 <nitrix> peterbecich: Lazersmoke https://coderpad.io/MAF264KT
21:01 <peterbecich> nitrix: sounds great :)
21:01 Ayey_ joined
21:05 <peterbecich> I wasn't expecting so much help. Thank you nitrix and Lazersmoke!
21:06 <haskellnewb> yes, this place is very nice to get help.. :)
21:07 <Akii> @karma+ nitrix
21:07 <lambdabot> nitrix's karma raised to 30.
21:07 <Akii> @karma+ Lazersmoke
21:07 <lambdabot> Lazersmoke's karma raised to 1.
21:08 <haskellnewb> @karma+ nitrix
21:08 <lambdabot> nitrix's karma raised to 31.
21:08 <haskellnewb> @karma+ zaquest
21:08 <lambdabot> zaquest's karma raised to 1.
21:08 Ferdirand joined
21:09 pilne joined
21:10 Ayey_ joined
21:16 Ayey_ joined
21:16 uglyfigurine joined
21:18 <Lazersmoke> @karma+ nitrix
21:18 <lambdabot> nitrix's karma raised to 32.
21:18 <Akii> :D
21:18 <Akii> and you get karma, and you get a karma; everyone gets a garma
21:18 <Akii> karma*
21:19 uglyfigurine joined
21:19 <haskellnewb> well.. karma's a bitch ^^
21:19 <Akii> Karma should always have a cat picture attached
21:19 <Akii> and that's where you're wrong
21:19 <Akii> could also be a cat!
21:19 <haskellnewb> xD
21:19 <Akii> data Karma = Cat | Bitch
21:20 <Akii> you'll never know
21:20 <Lazersmoke> https://cdn.meme.am/cache/instances/folder646/500x/60283646.jpg
21:20 <Akii> and some fancy category theorist will now create a natural transformation
21:20 <Akii> or smth
21:20 <nitrix> Schrödinger's karma.
21:20 <nitrix> Might be a Cat, might be a Bitch.
21:20 <haskellnewb> lol you guys are funny :D
21:20 <Akii> and then be like "the cat is just a bitch in the category of not giving fucks"
21:21 <Akii> and there's a library for that
21:21 <nitrix> Akii: Cats never gives fucks, so we should add it to your acme-fucks package.
21:21 <Akii> yes, we should
21:22 <nitrix> peterbecich: The coderpad expired. I swear that site used to be a lot better.
21:22 <Akii> type Cat = Fucks though
21:22 <Akii> although Cats have effects
21:22 <Akii> more like FucksT
21:23 <MarcelineVQ> nitrix: usually :>
21:23 <MarcelineVQ> oops you're talking, that could be confusing. *I'm usually here
21:23 <Akii> anyhow, gtg; gn!
21:23 initiumdoeslinux joined
21:24 <nitrix> MarcelineVQ: How much do you know about GAs and NNs ?
21:24 <peterbecich> nitrix: I saved the work. I will copy it to lpaste and my Github repo
21:24 <peterbecich> @karma+ nitrix
21:24 <lambdabot> nitrix's karma raised to 33.
21:25 <nitrix> MarcelineVQ: I can probably do a short quick briefing of either, maybe both, within 10 minutes :P
21:25 <MarcelineVQ> nitrix: about as much as https://www.youtube.com/watch?v=qv6UVOQ0F44 covers, so only enough to get interested
21:25 <peterbecich> @karma+ Lazersmoke
21:25 <lambdabot> Lazersmoke's karma raised to 2.
21:25 <MarcelineVQ> I've just barely started reading http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf as well
21:25 <nitrix> MarcelineVQ: I can explain neural networks very simply then.
21:25 mizu_no_oto_work joined
21:26 <nitrix> MarcelineVQ: The simplest neural network is called a perceptron.
21:26 metalbot joined
21:27 <nitrix> MarcelineVQ: Almost all NNs have the same topology: a directed graph, divided into layers. The first layer is the inputs, the middle layer is hidden from you, the last layer is the outputs.
21:28 <nitrix> MarcelineVQ: The human brain learns by reinforcing the connections between neurons that produce successful outcomes.
21:29 Ayey_ joined
21:29 <nitrix> MarcelineVQ: So we'll connect our graph such that each node in a layer connects to all nodes in the next layer. These connections (the edges of the graph) are assigned weights.
21:29 <peterbecich> Here are all of your revisions. I added one import and fixed the signature of `game`. Thanks so much nitrix and Lazersmoke! I will do the exercise left for me
21:29 <nitrix> MarcelineVQ: As inputs are received, the input values are propagated down the graph, getting multiplied by the weights of the connections as they travel along them.
21:30 <peterbecich> Curiously it still seems to run only five rounds. I will try to understand why: http://lpaste.net/728068622496301056
21:30 <peterbecich> thanks again!
21:31 <nitrix> MarcelineVQ: The last element: nodes may or may not stop the flow of what goes through them.
21:31 <peterbecich> I mean, it only takes five inputs from the player. It plays ten rounds. I'm not sure where it gets the missing five inputs from the player.
21:32 <peterbecich> forgot link. Revisions: https://github.com/peterbecich/haskell-programming-first-principles/blob/master/src/Morra.hs
21:32 <Lazersmoke> the missing five inputs are probably the newlines from getChar if I had to guess
21:32 <nitrix> MarcelineVQ: Each node works by summing all the values it receives and passing the result to its activation function. If the function determines we have to fire, we do; otherwise we don't.
21:32 <geekosaur> my guess also
21:32 <nitrix> MarcelineVQ: That's a very layman description of how it works operationally.
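A toy version of that operational description, as a sketch only; the step threshold and list-of-Doubles encoding are choices made here, not anything nitrix specified:

    type Weights = [Double]
    type Inputs  = [Double]

    -- propagate inputs along the incoming edges: multiply by weights, then sum
    weightedSum :: Weights -> Inputs -> Double
    weightedSum ws xs = sum (zipWith (*) ws xs)

    -- step activation: the node fires (1) only if the summed signal clears
    -- a threshold, otherwise it stops the flow (0)
    activate :: Double -> Double -> Double
    activate threshold s = if s >= threshold then 1 else 0

    -- one neuron of the next layer, fed by every node of the previous layer
    neuron :: Double -> Weights -> Inputs -> Double
    neuron threshold ws = activate threshold . weightedSum ws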
21:33 <geekosaur> but switching input to character at a time mode is platform dependent
21:33 <peterbecich> that makes a lot of sense. I need to use getLine.
21:33 <haskellnewb> for a bedtime reading: http://neuralnetworksanddeeplearning.com/chap1.html this is a pretty nice tutorial about NN
21:33 <nitrix> MarcelineVQ: Now you must be wondering how this enables the AI to learn anything?
21:34 <Lazersmoke> nitrix: what sorts of values are going through the network (arbitrary?), and what does multiply mean in this context?
21:34 Ayey_ joined
21:35 <nitrix> Lazersmoke: Booleans, Ints, Doubles, positive or negative, it doesn't matter.
21:35 <nitrix> Lazersmoke: You will, though, need to pick an activation function that covers the full range.
21:35 <nitrix> e.g. if you have values that are positive and negative, you probably do not want something like x >= 0.5
21:36 conal joined
21:36 <MarcelineVQ> nitrix: I was, I was wondering how the connections are strengthened when a correct answer is determined
21:36 <nitrix> Lazersmoke: I can talk about it after :)
21:37 <Lazersmoke> ok sounds good :)
21:38 <nitrix> MarcelineVQ: So for the NN to be any use it has to learn what it needs to solve. We can supervise the neural network or we may not. Ideally, we supervise it and give it a bunch of inputs, for which the neural network will produce an output. Then we look at the output and we determine if it is correct or not (that's the supervision part).
21:40 <nitrix> MarcelineVQ: If the network is correct, we can backtrace from the output all the nodes that contributed to producing the correct answer and increase the weights of the connections that lead to them, because, well, the network is doing great and we want to encourage the same behavior.
21:40 <nitrix> Similarly, if the answer is incorrect, we backtrack and punish only the nodes that contributed to giving a wrong answer and lower the weights of their connections.
21:41 <nitrix> Through enough samples, the network will regulate itself and some nodes will specialize to recognise such and such elements and fire for them or not.
21:41 <nitrix> Thus, eventually some form of decision tree / logical circuit arises.
21:42 <nitrix> MarcelineVQ: There are, though, better methods.
21:42 <nitrix> For example, I'm using a genetic algorithm.
21:43 <nitrix> Genetic algorithms are good for optimization problems. And evolving a competent AI is an optimization problem.
21:45 conal joined
21:46 <nitrix> MarcelineVQ: Is this at all helpful? I have a tendency to stick to very high-level generalizations until I get more specific questions.
21:46 <MarcelineVQ> yep
21:47 <MarcelineVQ> I just don't have anything to ask because I've not tried any of this yet :>
21:47 <Lazersmoke> how do you model data in a way such that you can multiply it by a weight? what if your training set is non-numeric for example?
21:47 <Lazersmoke> what does it mean to multiply an audio clip by a weight
21:49 <jle`> nitrix: sorry, had to run a bit for a midterm :)
21:50 conal joined
21:51 dni- joined
21:52 <nitrix> MarcelineVQ: Do you know genetic algorithms?
21:52 <MarcelineVQ> I know of them but I've never made one
21:52 <nitrix> MarcelineVQ: Okay, well it's enough to say what I wanted to say.
21:52 hiratara joined
21:52 <nitrix> MarcelineVQ: Pretend the individuals in your population that you're evolving are all neural networks.
21:54 <nitrix> MarcelineVQ: You generate neural networks randomly, test them, score them, keep the best, drop the worst, create new neural networks from two successful neural networks (all of this probabilistically), and you eventually end up with a very, very smart neural network.
21:54 <nitrix> In no time :3
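A very rough skeleton of that evolutionary loop, as a sketch only; the network representation, fitness function, crossover, and mutation are all left abstract, and the keep-half/breed-half split is one choice among many:

    import Data.List (sortOn)
    import Data.Ord (Down (..))

    -- one generation: keep the best half, breed (and mutate) replacements
    evolve :: (net -> Double)          -- fitness score
           -> (net -> net -> IO net)   -- crossover of two parents
           -> (net -> IO net)          -- mutation
           -> [net] -> IO [net]
    evolve score cross mutate pop = do
      let survivors = take (length pop `div` 2) (sortOn (Down . score) pop)
      children <- mapM (\(a, b) -> cross a b >>= mutate)
                       (zip survivors (drop 1 survivors ++ take 1 survivors))
      pure (survivors ++ children)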
21:55 kadoban joined
21:56 Decoy__ joined
21:56 madjestic joined
21:57 <nitrix> Lazersmoke: You often have to break it down.
21:57 <nitrix> Lazersmoke: For example, an image, you'd break it down into each pixels.
21:58 <nitrix> Lazersmoke: Which you'll see doesn't scale too well for large amounts of inputs.
21:58 <nitrix> So for image processing, people are mostly doing something called convolutional neural networks.
21:59 <nitrix> They break the image down into chunks, extract features from those chunks, and then the features (which are much less information) are what's given to the NN.
21:59 <MarcelineVQ> an idea that came up when considering NNs was one whose neurons could replicate, cell division iow, I'm not sure what would make sense for triggering that though. a single-celled organism just needs enough energy to do that I think, idk how neurons are about it
21:59 <Lazersmoke> isn't it significant that certain pixels are near to certain other pixels? Wouldn't it remove that information if it is broken down?
22:00 <nitrix> e.g. one NN for finding road signs on a picture, then only that road sign is fed to the fully connected NN.
22:00 <nitrix> Lazersmoke: Nope.
22:00 <nitrix> Lazersmoke: Since all nodes of your input layer connects to all nodes of the next layer.
22:00 <nitrix> Lazersmoke: The order becomes irrelevant.
22:01 <Lazersmoke> yeah that makes sense
22:01 <nitrix> The network will learn to give it some meaning.
22:02 <nitrix> MarcelineVQ: Yeah. People experiment with dynamic graphs, where nodes and connections are added, removed, etc.
22:02 <nitrix> There's an overwhelming amount of papers and there's not a "one size fits all" solution unfortunately :/
22:05 <nitrix> Then there's deep learning, where, when you have Google's computational power, your middle layer is actually multiple layers: 20, 50, maybe 1000 layers.
22:06 <nitrix> The idea being that it gives more room for complex pattern decisions to arise.
22:08 Ayey_ joined
22:09 <nitrix> Something akin to "If A, but not B, and C > D when D != 0". The more complex the rule to learn, the more layers needed. Scaling in width beyond your input size or output size makes very little sense from an information theory standpoint.
22:09 <nitrix> You /really/ can't create more information from what you were given.
22:09 <nitrix> You can just transform it or narrow it down.
22:10 NoCreativity joined
22:10 prophile joined
22:10 prophile joined
22:11 <monochrom> Actually the trick is to drop information, so you spot a general trend instead of fine-tuning.
22:12 <monochrom> or maybe s/fine-tuning/over-fitting/
22:12 <nitrix> monochrom: To avoid overfitting, one strategy I learned recently is to randomly disable nodes temporarily.
22:12 takle joined
22:12 <nitrix> monochrom: It forces other nodes that normally don't contribute as much to tip in.
22:13 <monochrom> Hrm that's interesting.
22:13 <nitrix> Once in a while, when a node is supposed to fire, you just say no.
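A sketch of that idea (essentially dropout), assuming activations are a list of Doubles; each node's output is silenced with probability p during a training pass:

    import System.Random (randomRIO)

    -- zero out each activation with probability p, otherwise pass it through
    dropout :: Double -> [Double] -> IO [Double]
    dropout p = mapM $ \activation -> do
      r <- randomRIO (0, 1)
      pure (if r < p then 0 else activation)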
22:13 <nitrix> No wonder we forget things sometimes.
22:13 <nitrix> Maybe we have a similar mechanism in our brain.
22:13 <monochrom> But yeah you've got to randomly rock the boat once in a while. Humans need this too, "get outside the comfort zone to learn more"
22:14 <nitrix> Where you can temporarily not remember something, to force yourself to recall other stuff and come up with your own stuff.
22:14 <nitrix> Personally, I'm sure I don't learn things verbatim. I learn the context and from learning that context I can reliably come up with the same group of conclusions.
22:14 <nitrix> It's less taxing mentally.
22:16 <nitrix> monochrom: So back to the overfitting: if a node is too important and it's not firing once in a while, the training will cause other nodes to share some of the responsibility of that node.
22:16 mounty joined
22:17 <nitrix> monochrom: Just because it can only be so reliable, more nodes are needed, and that alone tends to introduce more entropy and avoid overfitting.
22:17 <nitrix> Anyway.
22:17 <nitrix> Going home :P
22:17 <nitrix> Fun chatter :P
22:17 <MarcelineVQ> :D
22:21 vaibhavsagar joined
22:23 takle joined
22:24 pbrant joined
22:31 hiratara joined
22:35 Ayey_ joined
22:42 systemfault joined
22:50 madjestic joined
22:53 hphuoc25 joined
23:15 systemfault joined
23:26 Ayey_ joined
23:31 louispan joined
23:40 dni- joined
23:44 cschneid_ joined
23:46 conal joined
23:51 malaclyps joined
23:56 eacameron joined