00:45 laj joined
00:49 czart joined
00:57 <kaniini> TemptorSent: seems reasonable
01:00 <TemptorSent> kaniini: Okay, I'll add that to the hit list once I'm done with my current project (refactoring the functions I need from update-kernel into mkimage and scrapping the rest of the mess I don't).
01:03 blueness joined
02:24 s33se joined
03:05 Emperor_Earth joined
04:22 mackson joined
05:16 <TemptorSent> Question - what is the use case for NOT including host-keys when building an image?
05:20 iamthemcmaster joined
05:56 czart_ joined
06:00 fabled joined
06:10 <TemptorSent> Okay, project reimplement update-kernel appears to be working so far...
06:11 <TemptorSent> Trimming of modules included in modloop should be straightforward to implement now, leading to significant image size reductions.
07:10 rnalrd joined
07:15 <TemptorSent> fabled / ncopa : Any ETA on having an extract function in apk?
07:19 <clandmeter> TemptorSent, tar
07:20 <TemptorSent> clandmeter: Not fun.
07:20 <TemptorSent> clandmeter: recursive get can't pipe to tar.
07:21 <TemptorSent> clandmeter: And we don't have a flat list of packages fetched (Downloading ...)
07:22 <TemptorSent> clandmeter: Essentially a target-tree without setting root.
07:23 <TemptorSent> clandmeter: Otherwise, the option is to download recursively to a new output dir, iterate the files in that dir, and untar them.
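A minimal sketch of that workaround, assuming apk-tools of this era with fetch -R/-o; $pkgs and $dest are placeholders:

    # fetch the packages (plus dependencies) into a scratch dir,
    # then untar each downloaded .apk into the destination tree
    outdir=$(mktemp -d)
    apk fetch -R -o "$outdir" $pkgs
    for pkg in "$outdir"/*.apk; do
        tar -C "$dest" -xzf "$pkg"
    done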
07:25 <clandmeter> I didnt follow what you are trying to accomplish. You should check out the draft of new features/fixes from fabled for new apk-tools.
07:25 <clandmeter> i dont think fabled will put a lot of effort in current apk-tools.
07:28 <TemptorSent> clandmeter: Understandable.. I'll have to take a look and see what it would take. I hate hacking around things like this.
07:30 <TemptorSent> Piping the kernel package to tar to extract /boot is painful at best.
07:30 <clandmeter> its how i made pkgs.a.o :)
07:31 <TemptorSent> clandmeter: I'm already wasting literally hours per day waiting on things to extract (and redownload still in some cases :/)
07:33 <TemptorSent> To fix it, I have to track down every apk fetch --stdout and hack them.
07:33 <TemptorSent> And the fix is far more fragile.
07:39 fekepp joined
07:41 vakartel joined
07:44 <TemptorSent> What really kills me is that apk will just naively fetch the file again rather than checking the checksum of an existing one..
08:00 <TemptorSent> Anyway, the update-kernel functionality is in, working on testing it now.
08:12 <tru_tru> TemptorSent: setup-apkcache && apk cache download?
08:12 <TemptorSent> tru_tru: LOL Oh, how I wish it was that simple!
08:13 <TemptorSent> tru_tru: I'm working on the image builder, which needs at least one repo of its own per arch.
08:14 <tru_tru> https://wiki.alpinelinux.org/wiki/Local_APK_cache -> "Using the Local Cache with tmpfs volumes" might be ok? one file based cache per repo/arch
08:14 <TemptorSent> tru_tru: The problem is that apk --cache-dir isn't currently doing what's expected.
08:15 <TemptorSent> tru_tru: No dice -- the problem is that apk will fetch to stdout or outdir and not cache anything in the process.
08:21 <tru_tru> even if you use the --no-network flag?
08:26 t0mmy joined
08:34 tg joined
08:41 tty` joined
08:48 <TemptorSent> tru_tru: I need it to fetch packages from the network as needed, then cache them and NOT go to the network unless I really don't have a copy.
08:56 stwa joined
08:56 leo-unglaub joined
09:17 <tru_tru> hackish -> apk --no-network add wget || apk add wget, apk del wget && apk cache -v sync --purge
09:17 <tru_tru> otoh, ymmv :P
09:34 <TemptorSent> tru_tru : The apk add isn't where I'm running into problems as much, it's the apk fetches that are really fubaring things.
09:35 <TemptorSent> tru_tru : at least once I add it to the repo, it doesn't re-fetch the next time I do something (fix, say)
09:36 <TemptorSent> tru_tru: Or worse yet, when the package happens to be in two different lists that end up getting merged at the end, but it pulls two copies in the mean time.
10:08 <tru_tru> could "apk policy XXX" help? -> apk policy wget| grep 'etc/apk/cache' || apk fetch wget (but download locally, not in /var/cache/apk)
10:22 t0mmy joined
10:23 leo-unglaub joined
10:54 <TemptorSent> ncopa : Any idea why abuild-sign insists on using 'mv -i' when signing a repository index, totally buggering debugging by piping stdout/stderr to less :)
11:05 <ncopa> do you have coreutils installed?
11:06 <ncopa> there is an mv in do_sign
11:06 <ncopa> but there are no -i
11:09 <ncopa> TemptorSent: do you know if you have GNU coreutils mv or busybox mv?
11:09 <TemptorSent> ncopa good question.... I'm assuming busybox, but lemme see.
11:09 <ncopa> which mv
11:10 <TemptorSent> Right, mv is /bin/mv --> busybox
11:12 <ncopa> oh
11:12 <ncopa> this is nasty
11:13 <ncopa> https://git.busybox.net/busybox/tree/coreutils/mv.c#n109
11:15 <ncopa> seems like it has been like that for a long time
11:16 <TemptorSent> Ouch.. hmm I guess I could try blinding abuild by sticking it in a subshell with env -i and a null redirect.
11:16 <ncopa> looks like it happens if target exists
11:17 <ncopa> weird that we havent seen this before
11:19 <ncopa> TemptorSent: i think we need to use --force in abuild-sign
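The fix being suggested would amount to something like this in abuild-sign's do_sign (a hypothetical sketch; the variable names are placeholders, not the actual script):

    # force the overwrite so busybox mv never prompts when the
    # target index already exists
    mv -f "$tmptargz" "$targz"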
11:19 <TemptorSent> Not happy-making when it pops up in the middle of a 20 minute scratch session.
11:19 <ncopa> yes, its stupid
11:21 <TemptorSent> So, any thoughts on getting apk to directly spew out the contents of apks into a directory structure without setting up a new root? MAJOR bonus points if it lets you filter the extracted files, even more points for using the cache :)
11:22 <ncopa> i assume including dependencies
11:23 <TemptorSent> Even better :)
11:24 <ncopa> TemptorSent: can you check if this solves your interactive problem? http://tpaste.us/NRmP
11:24 <ncopa> to extract individual package:
11:25 <ncopa> apk fetch --stdout --quiet $pkg | tar -zx
11:26 <TemptorSent> ncopa: Yeah, not a good solution... especially when the cache isn't doing its job.
11:26 <TemptorSent> Worse when you want /boot out of linux-grsec.
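For the /boot case specifically, tar can at least limit what it extracts -- a sketch assuming the kernel package stores its files under boot/:

    # extract only the boot/ entries from the kernel package
    apk fetch --stdout --quiet linux-grsec | tar -C "$dest" -zx boot/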
11:27 <TemptorSent> I want a means of caching EVERY package that apk touches until I tell it to go on a cleaning spree.
11:28 <TemptorSent> I'll test the mv -f later, right now I'm too tired to see straight.
11:28 <TemptorSent> Running into an irritation that's crashing mkinitfs with a spurious /path/* not found
11:30 <TemptorSent> Has something changed recently that causes globbing to fail on strings like "$var/"* while "$var"/* works?
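For reference, both quoting styles glob the same way in POSIX sh (the * sits outside the quotes in each), and an unmatched glob is passed through literally -- one plausible source of a spurious "/path/* not found"; a small illustration with a placeholder path:

    # both forms expand identically; the * is unquoted in each case
    var=/lib/modules
    ls "$var/"*
    ls "$var"/*
    # if the directory is empty, neither pattern matches and the literal
    # string (e.g. /lib/modules/*) is handed to the command unexpanded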
11:30 <skarnet> you want to rename a file? use rename, not mv.
11:30 <skarnet> busybox and coreutils both have rename.
11:30 <skarnet> avoids problems with the mv -i alias.
11:31 <TemptorSent> skarnet: It's in abuild-sign.
11:31 <skarnet> (I normally don't suggest that because it's not posix, but here it's not about posix, it's about making things work on a system we control.)
11:31 <jirutka> skarnet: rename? I’ve never heard about this command
11:31 vakartel1 joined
11:31 <TemptorSent> Anyway, I'm going to sleep, must finish debugging when I can see.
11:32 <TemptorSent> ...and thinking straight might help too.
11:32 <skarnet> jirutka: for good reason - it's not a standard one.
11:32 <skarnet> TemptorSent: gn
11:32 <TemptorSent> G'night all.
11:32 <jirutka> skarnet: aha, that’s why it’s not on macOS/FreeBSD
11:32 <skarnet> indeed, it's a GNU thing that bb later implemented
11:32 <nmeum> re: perl and pod2man any objection to applying this patch http://sprunge.us/FIZf ?
11:32 <nmeum> ncopa: ^
11:32 <nmeum> jirutka: ^
11:33 <jirutka> skarnet: but why, actually? how is it different from mv?
11:33 <skarnet> it's just mv -f without the possibility of aliasing
11:33 <jirutka> nmeum: what, Valery hasn't fixed it yet?!
11:33 <skarnet> and, I suppose, ensuring that it's a rename() syscall
11:34 <skarnet> because mv -f will work across filesystems, and of course you can't rename() across filesystems, so in that case mv won't be atomic
11:34 <skarnet> whereas rename will just fail
11:34 <jirutka> nmeum: someone wrote him an email that he broke a lot of abuilds with his changes in perl pkg, I forgot about it then
11:35 <nmeum> jirutka: he proposed a patch but it wasn't merged so far
11:35 <nmeum> see the link in the commit description
11:35 <jirutka> aha, patchwork again >_<
11:35 <nmeum> his patch just moves a subset of these scripts back into the original package
11:35 <nmeum> I would prefer to move all of them back for now just to be sure...
11:36 <nmeum> anyways: unless you or ncopa dislike my proposed patch I would just commit it to finally fix this annoying issue
11:37 <jirutka> nmeum: I agree with you
11:37 <ncopa> hum
11:37 <ncopa> i think the intention was to make the perl runtime package as small as possible
11:37 <ncopa> nmeum: can you check the size difference?
11:38 <nmeum> sure
11:38 <jirutka> yes, the intention was good, but it breaks a lot of abuilds and no one was willing to find which ones and fix them
11:38 <* nmeum> is currently building perl
11:38 <ncopa> i suppose the other alternative is to try to fix the abuilds as they pop up
11:39 <ncopa> vakartel: do you have any opinion on reverting the perl pod* thingy? ^^^
11:39 <nmeum> that's going to be pretty annoying. gdb for instance just had an empty man page as a result of this change, and it didn't output any errors during the build
11:39 <ncopa> oh
11:39 <ncopa> silent breakages
11:39 <ncopa> thats bad
11:39 <nmeum> yep
11:40 <nmeum> I would prefer the following approach:
11:40 <nmeum> 1. revert the change
11:40 <nmeum> 2. find packages which depend on pod2man
11:40 <nmeum> 3. after finding all of them, add perl-dev to their makedepends and apply the change again
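One hypothetical way to do the search in step 2, assuming a local aports checkout (this only catches direct invocations; packages that call pod2man from their own build systems still need a rebuild to show up):

    # APKBUILDs that invoke pod2man directly
    grep -rl pod2man aports/*/*/APKBUILD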
11:41 <ncopa> if packages break silently then i dont think we have any option but revert
11:42 <nmeum> here is the size difference measured with du(1):
11:42 <nmeum> 8368 perl-5.24.1-r1.apk
11:43 <nmeum> 8416 packages/main/x86_64/perl-5.24.1-r2.apk
11:44 <nmeum> so this is the compressed package, not the space it actually takes on disk, but well… I don't believe that difference justifies the impact the change has
11:45 tmh1999 joined
11:46 <nmeum> http://sprunge.us/MeNJ y/n?
11:46 <nmeum> also reverted changes I made to fix some packages like gdb for instance
11:46 <jirutka> sry, afk
11:48 <ncopa> nmeum: dpkg reduces pkgrel, either leave pkgrel untouched or increase it
11:48 <nmeum> ok
11:48 <ncopa> we should probably also mention silent breakages in commit message
11:49 <ncopa> which i think is the major issue
11:51 <nmeum> ok, added that the commit message
11:51 <nmeum> *to
11:53 <nmeum> any other suggestion? otherwise I will push this now
12:02 <nmeum> pushed
12:03 <ncopa> i just added a comment to the ml
12:05 <^7heo> morning, folks
12:06 blueness joined
12:16 leitao joined
12:17 ferseiti joined
12:20 blueness joined
12:35 vakartel joined
12:37 farosas_ joined
12:38 stwa joined
12:39 farosas_ joined
12:55 tty` joined
13:01 ferseiti joined
13:02 leitao joined
13:30 ferseiti joined
13:46 stwa joined
14:38 ferseiti joined
15:07 leitao joined
15:35 <pickfire> Have anyone used any video conferencing tools on alpine?
15:36 <pickfire> I tried packaging skype and hangouts but both needs glibc.
15:47 <duncaen> ncopa: the chromium 57 update is segfaulting again because of the pthread stack size; we use the same default as glibc now, not sure how much is really necessary https://github.com/Duncaen/void-packages/blob/75be4272f602d26e9c7b3163d8bda6ca71f58535/srcpkgs/chromium/files/musl-patches/default-pthread-stacksize.patch
16:05 cyteen joined
16:35 leitao joined
16:36 vakartel joined
16:37 vaka joined
16:53 vidr joined
16:54 <ncopa> duncaen: are you sure its due to thread stacksize?
16:57 BitL0G1c joined
16:59 <duncaen> its not the GetDefaultThreadStackSize size, i think 2mb is still god there
16:59 <duncaen> its not the GetDefaultThreadStackSize size, i think 2mb is still good there
17:01 <duncaen> we tried a lot of builds, ~10 hours to find a working solution
17:02 <duncaen> somehow our gdb stops at the wrong place and says that it received an unknown signal, not sure if this is related to chromium
17:02 alacerda joined
17:04 <duncaen> with a coredump we could see that the failing instruction moves something into a non writable area
17:06 <kaniini> ncopa: http://github.com/kaniini/apk-gtk
17:09 <duncaen> the crashing thread uses kShutdownDetectorThreadStackSize/PTHREAD_STACK_MIN which is just 2048 with musl, instead of the stacksize returned by GetDefaultThreadStackSize
17:13 ferseiti joined
17:15 <ncopa> kaniini: nice ::)
17:15 <ncopa> duncaen: good work, i will look at that when i upgrade for alpine
17:17 <duncaen> i updated a few other patches, some code moved from webkit to base, this release fixes many cves
17:18 <duncaen> and linking with gold seems to be broken again, at least on void
17:30 fabled joined
17:33 ferseiti joined
17:47 BitL0G1c joined
17:49 leitao joined
17:50 leo-unglaub joined
17:50 leitao joined
17:55 gk-- joined
18:40 blueness joined
19:06 <TemptorSent> Anyone know wtf the firmware
19:07 <TemptorSent> 'carl9170fw' is for?
19:09 <TemptorSent> It has a full source tree included in the kernel-grsec package -- is that intentional, or did something break in the kernel build spewing it?
19:09 <TemptorSent> sorry, firmware package.
20:04 <TemptorSent> fabled/ncopa: Proposal for apk that may solve many problems when scripting: Add flags to echo the bare package-name/filename/output-file-path for fetch to allow a construct like:
20:08 <TemptorSent> apk --list-full-path fetch -R $pkgs | xargs -n 1 tar -C "$dest" -xzf
20:23 mikeee_ joined
20:53 tmh1999 joined
21:08 mikeee_ joined
21:24 blueness joined
21:24 leo-unglaub joined
21:26 <leo-unglaub> what would be the recommended way to store (redundant) 60TB of data?
21:26 <leo-unglaub> i am thinking of a raid10, but i am not a mass storage expert
21:29 <TemptorSent> leo-unglaub - It largely depends on your access patterns. Personally, I'd use zfs raid-z2 or -z3 with a big fronting cache.
21:30 <TemptorSent> leo-unglaub: But that's with my typical usage scenarios; yours may vary greatly.
21:30 <leo-unglaub> so you would let the fs do all the mirroring logic, ... ?
21:31 <TemptorSent> leo-unglaub: Definitely -- at that point, it's the only way you can sanely handle write gaps and possible silent data corruption.
21:32 <leo-unglaub> hmmm, that's what i thought .... because the kernel's built-in raid tools suck very hard at this scale
21:32 <TemptorSent> leo-unglaub: At 60TB, chances are you'll see drives failing more during restripe.
21:32 <leo-unglaub> i am calculating a disc failure every 4 months
21:33 <leo-unglaub> does this sound reasonable?
21:33 <TemptorSent> leo-unglaub: It depends on what the data and access patterns look like, esp hot / warm / cold data access times.
21:33 <TemptorSent> leo-unglaub: In my experience, you tend to have them go in batches unless you've done a good job of distributing your devices across manufacturing periods.
21:34 <leo-unglaub> very interesting point! never thought about it that way
21:34 <TemptorSent> leo-unglaub: I'd probably try for a different vendor, or at least different batch disks, in my primary/backup.
21:34 <TemptorSent> See the 'click-of-death'
21:35 blueness joined
21:36 <TemptorSent> Whatever you do, when you're looking at 60TB of data, you'll want to distribute it across multiple redundant nodes if possible.
21:37 <TemptorSent> Your restoration times from backup are prohibitive to say the least, so you really want to be able to hot-fail, or at least warm-fail to another running machine.
21:38 <TemptorSent> ZFS has a few inflexibilities in terms of pool upgrades, but it's definitely well tested and designed for storing large volumes of data safely.
21:39 <TemptorSent> The next step up would be a clustering FS or object store, which may be appropriate, depending on your application.
21:41 <tmh1999> leo-unglaub : how do you calculate disc failure ?
21:41 <leo-unglaub> tmh1999: statistics from the last 10 years in our datacenter
21:42 <leo-unglaub> TemptorSent: hmmm, good point
21:42 <TemptorSent> RAID 10 is only 50% space-efficient, so call it 4x the required storage -- 4x120TB, or 480TB of disk -- and it can still fail if certain pairs of disks go at once.
21:45 <leo-unglaub> yeah ... hmmm
21:46 <leo-unglaub> all this storage stuff sucks ... it's clearly not my field of expertise ... usually i am the guy who works on the stuff that runs in memory ... as soon as the data is on the disc it's not my problem anymore ... but that changed today ...
21:46 <TemptorSent> RAID-Z3 requires N+3 drives to provide safe 2-disk failure protection (and not so safe beyond that) -- so you can do pools of 12x 10TB data + 3x 10TB parity + 1x 10TB hot spare and require 160TB of drives for the same or better protection.
21:47 <TemptorSent> The tradeoff is speed, but some tuned caching and extra memory can usually compensate unless you have very specific workloads.
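To make that layout concrete, a hypothetical pool creation along those lines (pool and device names are placeholders, not from the discussion):

    # one 15-disk raidz3 vdev (12 data + 3 parity) plus a hot spare
    zpool create tank \
        raidz3 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo sdp \
        spare sdq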
21:48 <leo-unglaub> memory is not an issue ... i have 512 GB in every node
21:48 <TemptorSent> So what I would do is run two mirrored servers with a 160TB array in each and load them up with memory/InfiniBand with the extra money.
21:49 <TemptorSent> leo-unglaub: Okay, you might be getting into the realm of needing a high performance clustering solution... what kind of workload are you dealing with?
21:49 blueness joined
21:50 <TemptorSent> If you're pushing big bandwidth as well, you're getting out of my realm of daily experience and into the HPC crowd.
21:51 <leo-unglaub> its just for emails
21:51 <leo-unglaub> maildir storge format
21:53 <TemptorSent> leo-unglaub: Then ZFS would probably be good for your needs with some tuning, or possibly something lighter and a heavy backup program.
21:53 blueness joined
21:54 <TemptorSent> Do some research on zfs, your average filesize, and your transaction rate and compare it to xfs or btrfs or even gluster.
21:56 <leo-unglaub> i will, thanks for the help
21:56 <leo-unglaub> brb
22:05 blueness joined
22:10 blueness joined
22:11 cyteen joined
22:16 leitao joined
22:21 <clandmeter> kaniini, nice :)
22:29 blueness joined
22:33 blueness joined
23:07 <jirutka> leo-unglaub: if you need to store 60 TiB of data, forget about RAID, you need some distributed FS with redundancy, like Ceph and similar
23:36 <TemptorSent> I have an odd one here -- why would mkinitfs be puking when invoked on its own (appears to be giving lddtree bogus args, making cpio vomit), but run perfectly when run under sh -x /sbin/mkinitfs?