Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ (can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev2&y=22&m=5&d=16

Monday, 16 May 2022

00:00:00 <heat> might just be a weird niche case
00:00:00 <heat> like most of the nvme spec has, really
00:01:00 <heat> a lot of weird stuff going on because they don't require PCIe
00:01:00 <heat> like you have two separate scatter gather list types, and they're not supported on all commands, and one can't be used in nvme-over-fabrics
00:02:00 <heat> they probably went for lots of weird "haha IOPS goes brrrrrrrrrrrr" optimisations
00:03:00 <heat> (also, because of this, the linux driver is way more complex since it needs to support fabrics and PCIe)
00:03:00 <mrvn> you could just ignore one list, right?
00:03:00 <zid> where's the bit of the spec that makes it dma straight to my gpu for texture uploads
00:04:00 <mrvn> ahh, that's why there are 2 lists. :)
00:04:00 <heat> no
00:04:00 <heat> "PRPs shall be used for all Admin commands for NVMe over PCIe implementations. SGLs shall be used for all Admin and I/O commands for NVMe over Fabrics implementations (i.e., this field set to 01b). An NVMe Transport may support only specific values (refer to the applicable NVMe Transport binding specification for details)."
00:19:00 <heat> i've just noticed the ambiguity in the PRP statement - you can also use SGLs over PCIe for IO commands
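A rough sketch of the two descriptor shapes heat is contrasting here, going by the NVMe base spec as I read it: a PRP entry is a bare 64-bit page pointer, while an SGL data block descriptor carries an address plus a byte length, like an ordinary iovec. The struct and field names below are invented for illustration, not taken from any real driver header.

    #include <cstdint>

    // Sketch only: a PRP entry covers at most one memory page; an SGL data
    // block descriptor can describe an arbitrary-length contiguous run.
    struct prp_entry {
        uint64_t addr;              // page-aligned, except possibly the first entry
    };

    struct sgl_data_block {
        uint64_t addr;
        uint32_t length;            // byte count, so one descriptor can span pages
        uint8_t  reserved[3];
        uint8_t  id;                // descriptor type / subtype identifier
    } __attribute__((packed));

    static_assert(sizeof(prp_entry) == 8, "PRP entries are 8 bytes");
    static_assert(sizeof(sgl_data_block) == 16, "SGL descriptors are 16 bytes");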
00:42:00 <wxwisiasdf> hi
00:43:00 <mrvn> wo?
00:44:00 <gog> hi
00:45:00 <heat> hell
00:45:00 <heat> o
00:46:00 <mrvn> hell is other people -- Rugnetta
00:47:00 <gog> good thing i'm a cat
00:48:00 <heat> killall cat
00:49:00 <wxwisiasdf> delete cat
00:49:00 <wxwisiasdf> wait no if i delete cat my *nix dies :(
00:54:00 <mrvn> apt-get install dog
00:55:00 <heat> tac | tac
00:56:00 <wxwisiasdf> you use apt-get? nooo, the cool people nowadays use conan
00:56:00 <heat> the cool people use apt
00:56:00 <heat> apt install > apt-get install
00:56:00 <heat> actually, sorry
00:56:00 <heat> the cool people use pacman
00:56:00 <mrvn> ..."Have you mooed today?"...
00:56:00 <wxwisiasdf> pip install > apt install
00:56:00 <heat> I USE ARCH
00:56:00 <mrvn> cool people play pacman
00:56:00 <heat> BTW
00:56:00 <wxwisiasdf> oh no they use arch
00:57:00 <wxwisiasdf> mrvn: what about... pong
00:57:00 <wxwisiasdf> there is love for pacman, but no love for pong? :(
00:58:00 <mrvn> you aren't cool enough for pong
00:59:00 <wxwisiasdf> what about the cubic entertainment box
00:59:00 <wxwisiasdf> more often called, "that blocky wobbly rectangle pc thingy"
01:01:00 <heat> ping is better
01:07:00 <mrvn> the not-idiot-box?
01:39:00 <gog> i use "arch"
01:40:00 <zid> how do you know if someone uses arch
01:40:00 <gog> they don't shut the fuck up about it
01:40:00 <zid> :D
01:42:00 <gog> i mean linux is linux
01:42:00 <gog> i've just been using arch-based for so long i'm in a sort of inertia with it
01:42:00 <zid> yea I'm fine with gentoo and just cbf to not use it
01:42:00 <zid> I picked it because it suited my needs and they haven't really changed so why bother
01:43:00 <gog> i did enjoy the deep customizability of gentoo
01:43:00 <gog> but not the compiling
01:44:00 <zid> The compiling used to be a non-issue for me even on a VM, but with my LTO addiction it's starting to become one :P
01:45:00 <gog> is LTO the new -funroll-loops?
01:45:00 <zid> LTO is -O3 for your -O3
01:45:00 <zid> -Ocubed
01:45:00 <gog> -O9 i see
01:46:00 <gog> or would that be 27??
01:46:00 <zid> Actually 333,333,333
01:47:00 <zid> It does use about a million times as much RAM to compile!
01:47:00 <gog> threes all around
01:47:00 <gog> i'll drink to that
01:47:00 <gog> it might be possible that ill give gentoo another go
01:47:00 <zid> You should test your drink for poison first
01:47:00 <zid> have two
01:48:00 <gog> i got a job and i've been working evenings and my paycheck is gonna be pretty good
01:48:00 <zid> nice, what are you buying me?
01:48:00 <gog> a computer
01:48:00 <zid> I've got one thanks, need a new monitor though
01:48:00 <gog> its for you in the sense that you will no longer have to tolerate my whinging about my shitty netbook
01:48:00 <zid> I think the OSD chip and/or powersupply are going
01:49:00 <heat> arch based is just like using linux mint
01:49:00 <heat> arch linux manually installed = chad move
01:49:00 <gog> arch cringe amirite
01:49:00 <zid> if you show it the wrong shade of certain colours you get random 16 pixel wide lines everywhere
01:49:00 <zid> and it likes to crash
01:49:00 <gog> that's a bizarre problem for a monitor to have
01:49:00 <zid> I think it's the OSD chip overlaying garbage
01:49:00 <gog> yeh
01:49:00 <zid> because of certain other issues I've seen during the crashes
01:50:00 <zid> I'm not sure if the logic board is dying, or if the psu is just dying and it's making it flake out though
01:51:00 <zid> I'd take it apart and recap it or whatever, but I don't fancy taking it apart once, let alone twice
01:51:00 <zid> broken plastic clips and annoying amounts of hotglue are surely down that path
01:51:00 <gog> maybe worth a try to fix if/when you replace it
01:51:00 <zid> yea that was the plan
01:51:00 <zid> get a new monitor, THEN open it
01:51:00 <gog> yes
01:51:00 <zid> but I refuse to buy a worse monitor
01:51:00 <zid> when a better monitor is only twice the price ;)
01:52:00 <zid> so I am in partially broken monitor limbo
01:52:00 <gog> you don't have one sitting you can bring out of retirement for the meantime?
01:52:00 <zid> 1440p 144Hz is down to like £249 for a weird brand panel
01:52:00 <zid> I have a 1280x1024 CRT on my desk that I can't drive unless I swap my 1050ti for my 670 with a bad vrm
01:52:00 <heat> 1440p is always crazy expensive
01:52:00 <zid> this is 2048x1152
01:53:00 <gog> why can't your card run it?
01:53:00 <zid> I bought it because it was literally the only thing >1080p at the time that wasn't the £3000 apple cinema pro display thing
01:53:00 <zid> >980ti means no RAMDAC
01:53:00 <zid> I'd need a £100 hdfury
01:53:00 <gog> ohhh
01:54:00 <zid> or an ossc or something, anyway
01:55:00 <heat> you should install arch linux
01:56:00 <heat> might fix it
01:56:00 <zid> You might be able to install arch on the osd chip
01:56:00 <gog> the big q i have is if i should continue to use linux only when i get my new rig
01:56:00 <zid> it's a standard part basically all monitors use
01:56:00 <gog> think it'll work even for games
01:56:00 <zid> and there was that defcon/blackhat/cccc or whatever talk about hacking them
01:56:00 <gog> proton seems fine enough for what i want to play
01:56:00 <zid> get two graphics cards
01:56:00 <zid> passthrough one to a VM
01:56:00 <heat> i'll always go for windows as well
01:56:00 <heat> realistically, linux gaming is still crap
01:57:00 <gog> the one i'm looking at is a ryzen 5 with a discrete nvidia
01:57:00 <zid> farmville won't know what hit it
01:57:00 <heat> laptop?
01:57:00 <gog> yeah
01:57:00 <zid> oh, a laptop
01:57:00 <heat> doesn't it come with windows?
01:57:00 <gog> yeah it does
01:57:00 <heat> ok gr8 problem solved
01:57:00 <heat> ez
01:57:00 <zid> I'd go with windows with vmware linux vm then
01:57:00 <heat> i dual boot
01:57:00 <zid> but downgrade the windows to xp64 so that it's actually nice to use
01:58:00 <gog> nah 10 with all the crap cleaned out is fine
01:58:00 <gog> 11 is an absolute no-go for me though
01:58:00 <heat> 11 is better
01:58:00 <gog> idk i don't like it
01:58:00 <heat> its like windows 10
01:58:00 <heat> but slightly different
01:58:00 <zid> 11 just seems identical to 10 for me, but with a TPM being available for ad tracking and drm as a base feature
01:58:00 <zid> I think that's literally the only reason w11 exists
01:59:00 <heat> what does a TPM have to do with ad tracking
01:59:00 <zid> hardware id
01:59:00 <gog> maybe i'll just use linux only
01:59:00 <heat> you already have like 1 or 2 mac addresses
01:59:00 <zid> If they *don't* offer that as a feature to customers they're missing a trick
02:00:00 <heat> gog, what are you planning to run anyway?
02:00:00 <gog> nothing terribly new
02:00:00 <zid> like how facebook integration on websites is primarily so they can correlate your other accounts to your facebook account, this lets them also tie in your windows login
02:00:00 <zid> with it being an 'online' account you will have to port it to new hardware and they will never ever lose you
02:00:00 <heat> gog, like what?
02:00:00 <heat> discrete gpus are usually not great
02:01:00 * heat cries in laptop
02:01:00 <zid> integrated graphics is like.. fine
02:01:00 <gog> new vegas, maybe elden ring
02:01:00 <zid> it runs overwatch these days
02:01:00 <zid> elden ring isn't even that playable on a 2080
02:01:00 <heat> hm? last i checked it struggled to run csgo
02:01:00 <gog> idk if i'll even play it
02:02:00 <gog> honestly i could probably run new vegas on the 4650's onboard
02:02:00 <gog> that's my backup plan if the one i have my eye on gets got, it's a floor model and about 25% discount
02:02:00 <heat> you should play dark souls, the second best video game ever
02:02:00 <gog> its sibling is slightly less good with half RAM/storage
02:02:00 <zid> I've already got all the achievements though :(
02:02:00 <gog> but no discounted model
02:04:00 <gog> i've shopped around and buying from amazon is about the same after import fees and VAT
02:04:00 <gog> for equivalent specs
02:05:00 <zid> https://www.videocardbenchmark.net/gpu.php?gpu=Intel+UHD+770&id=4473
02:05:00 <bslsk05> ​www.videocardbenchmark.net: PassMark - Intel UHD 770 - Price performance comparison
02:05:00 <zid> https://www.videocardbenchmark.net/gpu.php?gpu=GeForce%20GTX%201050%20Ti&id=3595
02:05:00 <bslsk05> ​www.videocardbenchmark.net: PassMark - GeForce GTX 1050 Ti - Price performance comparison
02:06:00 <zid> It's honestly not bad at all for being free and inside your cpu
02:07:00 <heat> that's a desktop gpu
02:07:00 <zid> that's the point of the comparison, yes
02:07:00 <zid> UHD 770 vs a real gpu
02:07:00 <zid> it's about half as good
02:07:00 <heat> no, the uhd 770 is only on desktop cpus
02:07:00 <zid> yes it's the new one
02:08:00 <zid> (I think at various points though laptop integrated may have been ahead, because 7nm+++++ was mobile only and stuff)
02:12:00 <heat> amd apus used to be way ahead of intel integrated GPUs
02:12:00 <heat> dunno if that's still the case
02:12:00 <zid> yea I never followed all this
02:13:00 <zid> I just knew vaguely they were "fairly" good, and it turns out that means about half as good as a real normal gpu
02:13:00 <zid> not a fancy new one
02:41:00 <mrvn> new Monitor[4];
06:46:00 <geist> also, ugh. i thought my server was stable, but nope. crashed again after 3 or 4 days
06:47:00 <geist> at this point it has to be a cpu failure or a vreg failure or something
06:47:00 <ddevault> ayy I woke up with an idea as for why I couldn't get to userspace and lo and behold now I have userspace working
06:47:00 <Mutabah> noice!
06:48:00 <geist> yay
06:51:00 <kazinsal> :toot:
06:52:00 <gamozo> woo
06:52:00 <kazinsal> geist: godspeed you! fellow vaguely broken server emperor
06:53:00 <kazinsal> my old E5-2650 box is... starting to do really weird stuff. I think the CMOS chip may be failing as no matter how many new batteries I feed it, it loses its CMOS config with even a few seconds of power loss
06:53:00 <kazinsal> POST is starting to take longer and longer as well, which is a problem because being sourced from a 1U server, it has a distressingly long POST
06:54:00 <kazinsal> it's nearing the point where I'm thinking about just getting whatever a 5800X and matching mobo and RAM to replace it and eating the hit to my savings account haha
06:55:00 <kazinsal> power here can be a bit messy in the winter and spring because a lot of the power distribution grid has a bunch of chokepoints that would have been acceptable to go down when there was only 20,000 people living here but becomes a major disruption when there's 150,000...
07:04:00 <klys> suggesting the dual socket option
07:07:00 <klys> https://www.ebay.com/itm/234456087004 I don't think the regular ryzen models can do this though epyc is similar (ddr4 not ddr5 tho) some cpus support tandem work, some don't
07:07:00 <bslsk05> ​www.ebay.com: Supermicro Motherboard MBD-H12DSI-N6-B SoC AMD EPYC7003/7002 SP3 4TB DDR4 672042446001 | eBay
07:08:00 <kazinsal> honestly I could solve a lot of my power related problems with a cheap UPS but it'd be nice to have some cores that were designed in this decade
07:08:00 <klys> oh forget that it was the wrong item (needs 7003 support for good spu compatibility)
07:08:00 <klys> s/spu/cpu/
07:08:00 <kazinsal> the computational throughput of a 2.0 GHz sandy bridge core is kind of sad in comparison to a fully clocked 5800X or 11900K
07:10:00 <klys> oh actually that was the right item and I can't read because it's late
08:30:00 <lg> qemu with -m raspi0 doesn't seem to have given me atags data, r0-2 are all 0 when it boots into the kernel stub
08:43:00 <mrvn> lg: ATAGS are kind of dead. Device Tree is just better and used on basically all ARMs.
08:44:00 <mrvn> Not sure why raspi0 wouldn't have a DT. The other raspi models have them iirc.
08:46:00 <mrvn> kazinsal: I want a UPS that has an ATX output. It's stupid to boost the battery power back to 230V AC just to have the PSU get it back to 12/5V DC.
08:47:00 <mrvn> Basically same design as every laptop.
11:58:00 <lg> mrvn: does the rpi bootloader put that together or is it left as an exercise for the reader?
11:59:00 <mrvn> it's put together and modified depending on the options and overlays in the config on your boot partition
15:00:00 <mrvn> How do I write a template<typename T [[gcc:packed]]>?
16:02:00 <Bitweasil> lg, the dtb files are present, and you can find the dts source files in the Rpi kernel tree. Or just use 'dtc' to disassemble the running ones.
16:02:00 <Bitweasil> But the bootloader does a few things, and will add overlays/etc depending on config.txt, so what's passed to the kernel isn't exactly what's on disk.
16:02:00 <Bitweasil> I think it also adds things like the RAM size/range/etc.
16:03:00 <mrvn> You can't trust the dtc output of the running one in linux. It corrupts it somewhat.
16:07:00 <Bitweasil> The one in /sys/firmware or such?
16:07:00 <Bitweasil> There's a 'file' input type to dtc to handle that.
16:09:00 <Bitweasil> I guess 'fs'
16:20:00 <mrvn> Bitweasil: not sure if it was sys or proc but it drops the cell/address sizes at the wrong places.
16:21:00 <mrvn> I think the virtual FS code for it isn't designed to handle cell/address sizes switching between 0, 1 and 2 all the time like the RPi DT does.
16:21:00 <Bitweasil> Oh, huh. Alright, didn't know that was an issue! TIL!
19:09:00 <geist> oh hmm that might make sense
19:09:00 <geist> because maybe it re-orders the nodes when it flattens it into the fs
19:09:00 <geist> so you dont get precisely the order in the FTD
19:09:00 <geist> FDT
19:15:00 <mrvn> the tree structure is quite recognizable, it's just not like the original input
19:19:00 <geist> yah i say that because i have taken to tarring up the /proc/device_tree trees in the past as a nice way to consume them
19:20:00 <geist> saving a copy when working on a device, etc
19:22:00 <mrvn> I tried using that as test data for my DT parser and couldn't figure out why it bombed till I compared it to the one from the boot partition.
19:25:00 <geist> good to know
19:26:00 <geist> in that case you're saying even the FDT you read out of /proc or /sys is already munged?
19:26:00 <mrvn> might be specific to the RPi because how it remaps the devices from 32bit peripherals to 64bit arm addresses to 32bit VC addresses
19:27:00 <geist> yah makes sense
19:27:00 <geist> i dunno what the spec says either, does it say the cell/address/interrupt size fields must always be at the start of the node?
19:28:00 <mrvn> Can't remember if I checked the raw data or just the dtc output.
19:28:00 <mrvn> every node can have a cell/address field afaik.
19:28:00 <clever> one sec
19:29:00 <clever> /sys/firmware/fdt has the raw fdt that linux received as input
19:30:00 <clever> and when you run dtc on it, the nodes are returned without any re-ordering (unless you add -s to sort)
19:30:00 <mrvn> can you diff that against the one on the boot disk?
19:31:00 <clever> i also prefer using -s when doing diffs, so order changes dont muck up the diff, but if you want to know about order changes...
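For anyone following along, the cell-size juggling mrvn is describing looks roughly like this: a node's "reg" property is a list of (address, size) pairs, and how many 32-bit big-endian cells each half occupies comes from the parent node's #address-cells / #size-cells, which the Pi tree changes between levels. A minimal sketch, not mrvn's parser, with invented helper names:

    #include <cstddef>
    #include <cstdint>

    // FDT property cells are 32-bit big-endian; assemble 'ncells' of them into
    // one value. ncells of 0, 1 or 2 are all legal, which is what trips parsers up.
    static uint64_t read_cells(const uint32_t *p, unsigned ncells) {
        uint64_t v = 0;
        for (unsigned i = 0; i < ncells; i++)
            v = (v << 32) | __builtin_bswap32(p[i]);
        return v;
    }

    struct reg_entry { uint64_t addr, size; };

    // One entry of a "reg" property, interpreted with the *parent* node's
    // #address-cells and #size-cells.
    static reg_entry parse_reg_entry(const uint32_t *reg,
                                     unsigned address_cells, unsigned size_cells) {
        return { read_cells(reg, address_cells),
                 read_cells(reg + address_cells, size_cells) };
    }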
21:26:00 <heat> yo geist do you know why the nvme spec looks so insanely optimised?
21:26:00 <heat> i feel like they just went for the max IOPS they could but you may know better as an Industry Insider(tm)
21:26:00 <mrvn> to get 32GB/s?
21:27:00 <heat> IOPS, not bandwidth
21:28:00 <mrvn> do you know how many IOPS you need for 32GB/s?
21:28:00 <heat> something I finally figured out today - PRP entries only cover a single page, SGL entries cover a phys address + length (like a regular vectored IO thing); it's unclear why anyone would prefer the PRP
21:28:00 <heat> nope
21:28:00 <mrvn> 8388608 IOPS for the PRP entries
21:29:00 <mrvn> Even if SGL what do you think the average length would be?
21:29:00 <heat> depends on the size of the IO
21:30:00 <mrvn> ever benchmarked that in your kernel?
21:31:00 <heat> https://i.imgur.com/mRdijUh.png /me thinks they really just went for IOPS optimisations both in-hardware and in-driver
21:31:00 <mrvn> I think linux does 128kb (32 pages) read-aheads.
21:31:00 <heat> no, I don't really do IO larger than a page for now
21:31:00 <mrvn> And for DBs you probably get 4k access a lot of the time.
21:32:00 <clever> some engines like sqlite and innodb, allow you to set a page size, and everything operates in units of that
21:32:00 <mrvn> that better be the same as the ZFS block size :)
21:32:00 <clever> for sqlite, the journal is just saving a backup copy of the entire page before its modified in-place
21:33:00 <clever> and a commit is just throwing out the backup, while a rollback is restoring it
21:33:00 <clever> when not using the write-ahead-log feature
21:33:00 <clever> mrvn: nothing currently goes out of its way to make sqlite and zfs line up automatically, the user must choose to
21:34:00 <geist> heat: yeah right?
21:34:00 <mrvn> clever: yes, and you better do.
21:34:00 <geist> also it seems to be designed to be virtualized well too. they really go out of their way to reduce the number of trapped things (ie, accesses to registers)
21:35:00 <geist> but yeah what i dont completely understand is what implementations implement. lots of it is optional, so there's probably some baseline that everything is expected to do
21:36:00 <mrvn> even the pins on the connector have optional bits.
21:37:00 <mrvn> well, different notches and all
21:37:00 <geist> heat: hmm, so what does that linux doc link imply? they actually use PRPs or SGLs?
21:37:00 <geist> can a PRP do >4K linear run? may be some advantages to that in some situations
21:39:00 <mrvn> "PRP entries only cover a single page,"
21:39:00 <heat> geist, it seems they use SGLs since they map better to bio_vec
21:40:00 <heat> https://www.snia.org/sites/default/files/SDC/2019/presentations/NVMe/Hellwig_Christoph_Linux_NVMe_and_Block_Layer_Status_Update.pdf and https://www.snia.org/sites/default/files/SDC/2017/presentations/NVMe/Hellwig_Christoph_Past_and_Present_of_the_Linux_NVMe_Driver.pdf are decent reads
21:40:00 <geist> mrvn: maybe they meant a single segment. geez you're being kinda picky today
21:41:00 <mrvn> or maybe a page could be 2MB? ask heat
21:41:00 <geist> mrvn exactly
21:41:00 <geist> anyway yeah linux directly mapping scatter lists to SGLs makes sense to me, it's generally what i'd expect for doing IO into a block cache
21:42:00 <heat> PRP entries represent nvme memory pages
21:42:00 <heat> which may or may not be 4K (but usually are, for instance fuchsia's driver errors out if the controller doesn't support the native page size)
21:42:00 <geist> so it is kinda interesting then what the exact purpose of the PRPs are in this case. i suppose one could just issue a crap ton of transfers, breaking everything into 4K
21:42:00 <heat> that's just something you configure
21:43:00 <mrvn> It used to be scatter gather was to collect lots of little 4K IOPS into a single big read or write.
21:43:00 <geist> and except for very high end, since queuing transfers on nvme is easy that might make a lot of sense
21:43:00 <heat> also, more weirdness (dunno if you read it yesterday)
21:43:00 <heat> <heat> "PRPs shall be used for all Admin commands for NVMe over PCIe implementations. SGLs shall be used for all Admin and I/O commands for NVMe over Fabrics implementations (i.e., this field set to 01b). An NVMe Transport may support only specific values (refer to the applicable NVMe Transport binding specification for details)."
21:43:00 <mrvn> now we just pass the io vecs to the hardware
21:43:00 <geist> hmm, what does that mean heat?
21:43:00 <mrvn> heat: are there any admin commands larger than a page?
21:44:00 <heat> geist: so, you have admin commands (nvme namespacing, identify, etc) and IO commands (read, write, compare, etc)
21:45:00 <heat> for whatever reason, you can't use SGLs for admin commands in NVMe over PCIe
21:45:00 <geist> hmm, interesting okay
21:45:00 <geist> so it implies that PRPs are *always* implemented, at least
21:45:00 <geist> does a base implementation have to implement both?
21:45:00 <heat> except for fabrics
21:45:00 * geist nods
21:46:00 <heat> it looks like a PCIe one does? but I'm not too sure
21:46:00 <geist> yah wondering if a base implementation can get away with just PRPs
21:47:00 <geist> and if so is it pretty efficient or is that just silly
21:47:00 <geist> since you can queue and retire commands pretty easily, i'm thinking PRP-only impls are probably pretty good
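A hand-wavy sketch of the PRP-only path geist is musing about, assuming 4K controller memory pages: PRP1 holds the first page of the buffer, and PRP2 holds either the second page or, when more than two pages are needed, the physical address of a page filled with further 8-byte PRP entries. List chaining and first-page offsets are left out, and all names below are made up:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct prp_pair { uint64_t prp1, prp2; };

    // pages: physical address of each 4K chunk of the buffer, in order.
    // prp_list_phys / prp_list_virt: one scratch page owned by the driver,
    // given by its physical address and a writable kernel-virtual alias.
    prp_pair build_prps(const std::vector<uint64_t> &pages,
                        uint64_t prp_list_phys, uint64_t *prp_list_virt) {
        prp_pair p{pages.at(0), 0};
        if (pages.size() == 2) {
            p.prp2 = pages[1];                  // two pages: PRP2 is just the second page
        } else if (pages.size() > 2) {
            for (std::size_t i = 1; i < pages.size(); i++)
                prp_list_virt[i - 1] = pages[i];
            p.prp2 = prp_list_phys;             // more: PRP2 points at a PRP list page
        }
        return p;
    }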
21:53:00 <geist> hmm are there restrictions on the native nvme page size? i assume the minimum sector size is 512, but is the max 4k?
21:54:00 <heat> minimum 4K max 128MB IIRC
21:54:00 <geist> i have somewhere a, i think, WD Blue nvme that can be and was reformatted (by me) to be 4K sectors
21:54:00 <heat> but the controller can further restrict that range
21:54:00 <geist> hmm, that's page at a transfer level, but sectors are different?
21:54:00 <heat> /shrug
21:54:00 <geist> since you can definitely have 512 byte sectors still, all my samsungs are 512
21:54:00 <mrvn> native or emulated?
21:55:00 <heat> i bet these things don't emulate 512 byte sectors
21:55:00 <mrvn> that would require 8 times more descriptors.
21:55:00 <geist> i only say that because it explicitly was a thing i had to reformat using the nvme-cli utilities
21:55:00 <clever> ive seen exactly 1 rpi user, that had an issue with booting from 4k usb (sata i think) drives
21:55:00 <heat> mrvn, no it wouldn't, page != sector
21:55:00 <geist> if you look at the description of the device using `nvme list` it tells you the sizes
21:55:00 <clever> (the rpi firmware doesnt support it)
21:55:00 <mrvn> What's the erasure size on NVMEs?
21:56:00 <clever> but, he found an XP compatibility util for his hdd
21:56:00 <clever> which forced the drive to go into 512 emulation mode
21:56:00 <clever> and then things worked fine
21:56:00 <geist> yah as far as i've found, consumer level samsung devices all only support 512. you can't reformat
21:56:00 <geist> but the WD ones do let you do 4K
21:56:00 <geist> but thats at the sector level, and thus presumably sets the minimum transfer block size
21:56:00 <heat> what does reformatting involve? deleting all namespaces and creating a new one?
21:57:00 <clever> this is the only time ive ever seen somebody successfuly change the emulation size
21:57:00 <geist> basically
21:57:00 <heat> btw something cute I found out about: nvme has fused commands
21:57:00 <geist> the nvme command can do it if the firmware lets you. i think you reformat at the namespace granularity, but i dunno if devices that actually support more than one namespace let you mix it
21:58:00 <heat> you can fuse a compare command and a write command to get compare-and-write behavior
21:58:00 <clever> :O
21:58:00 <clever> disk block compare??
21:58:00 <heat> yup
21:58:00 <clever> i could almost see that being useful for lock-less sharing of an fs
21:58:00 <geist> so basically a conditional write i guess?
21:58:00 <heat> yes
21:59:00 <geist> for that example
21:59:00 <clever> if you fail the race, the write doesnt happen
21:59:00 <geist> kinda interesting, though i dont know precisely what that would accomplish except an optimization
21:59:00 <geist> though i guess if it returns an error code based on what it does
21:59:00 <geist> it'd tell you if the thing it replaced was different or not
21:59:00 <geist> that might be helpful
22:02:00 <geist> or if it had the ability to return the old data maybe
22:02:00 <geist> but seems like that'd be a specialised command that did a read/write
22:02:00 <mrvn> you mean the new data, right?
22:02:00 <clever> the only thing that comes to mind in terms of flexibility, is the 1541 from the c64 era
22:02:00 <heat> you get the status from the command telling you if it failed
22:02:00 <heat> it works like a cmpxchg as far as I can see
22:03:00 <clever> where you could read/write blocks to/from the disk drives ram, transfer them to/from the pc, and execute code on the drive itself
22:03:00 <heat> well, except the xchg
22:03:00 <mrvn> clever: well, the floppy was another 6502
22:03:00 <heat> cmpxwrite
22:03:00 <clever> mrvn: exactly, you can just program it freely, to do whatever you want
22:03:00 <mrvn> clever: it still took them like 40 years to optimize the firmware to decode the 8to11 encoding at the disk speed.
22:03:00 <geist> but anyway nvme is the bees knees
22:04:00 <heat> this is also specified as an atomic operation within certain transfer sizes haha
22:04:00 <heat> disk mutexes go brrrrrrrrrrrr
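A sketch of the "disk cmpxchg" heat is describing, as I read the spec: a fused operation is two adjacent submission queue entries, Compare (opcode 0x05) followed by Write (0x01), with the FUSE field in CDW0 set to 01b on the first and 10b on the second, both naming the same LBA range; if the compare fails, the write is not performed. Queue bookkeeping and data pointers are elided, and the helper names are invented:

    #include <cstdint>
    #include <initializer_list>

    struct nvme_sqe { uint32_t cdw[16]; };      // one 64-byte submission queue entry

    static void set_cdw0(nvme_sqe &c, uint8_t opcode, uint8_t fuse, uint16_t cid) {
        c.cdw[0] = opcode | (uint32_t(fuse) << 8) | (uint32_t(cid) << 16);
    }

    void queue_compare_and_write(nvme_sqe *sq_slot, uint32_t nsid,
                                 uint64_t slba, uint16_t nlb_zero_based) {
        nvme_sqe cmp{}, wr{};
        set_cdw0(cmp, 0x05, 0x1, /*cid=*/1);    // Compare, FUSE = first of pair
        set_cdw0(wr,  0x01, 0x2, /*cid=*/2);    // Write,   FUSE = second of pair
        cmp.cdw[1] = wr.cdw[1] = nsid;
        for (nvme_sqe *c : {&cmp, &wr}) {       // same LBA range in both halves
            c->cdw[10] = uint32_t(slba);
            c->cdw[11] = uint32_t(slba >> 32);
            c->cdw[12] = nlb_zero_based;        // number of blocks, 0's based
        }
        sq_slot[0] = cmp;                       // the two entries must sit next to
        sq_slot[1] = wr;                        // each other in the same queue
    }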
22:08:00 * mrvn throws a lockguard at heat
22:09:00 <heat> but you define it wrong and get most vexing parse'd
22:10:00 <mrvn> heat: forgot to give the guard name, did you?
22:10:00 <mrvn> +a
22:11:00 <mrvn> *kneel* Lockguard i dub thee Loki.
22:14:00 <geist> zod
22:25:00 * Griwes throws almost always auto and ctad at heat
22:27:00 <heat> auto lck = std::lock_guard(mtx);
22:27:00 <heat> this looks horrific
22:27:00 <mrvn> std::lock_guard lck(mtx);
22:27:00 <heat> i know it's literally the same codegen, but damn it looks bad
22:27:00 <mrvn> or better {mtx}
22:28:00 <klange> with lock():
22:28:00 <mrvn> I'm missing that
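The bug mrvn and heat are trading jokes about, spelled out; the commented-out lines are the broken variants:

    #include <mutex>

    std::mutex mtx;

    void worker() {
        // Fine: the named guard holds the lock until the end of the scope.
        std::lock_guard<std::mutex> lck(mtx);

        // Forgot the name: this constructs a temporary that locks and unlocks
        // on the same line, guarding nothing.
        // std::lock_guard<std::mutex>{mtx};

        // Most-vexing-parse flavour: this declares a function g returning a
        // lock_guard; nothing is locked at all.
        // std::lock_guard<std::mutex> g();
    }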
22:28:00 <wxwisiasdf> hi, so basically how do i force autoconf to force gcc to force programs to use shared libraries
22:29:00 <wxwisiasdf> instead of gcc straight up putting the entire custom libc into the program
22:29:00 <mrvn> you can #define a WIDTH(mtx) { ... }
22:29:00 <klange> you don't, because libtool gets in the way
22:29:00 <mrvn> -D
22:29:00 <wxwisiasdf> klange: uh oh
22:29:00 <mrvn> wxwisiasdf: libc isn't put into the binary at all normally
22:29:00 <heat> you have to find and patch libtool.m4 and then autoreconf
22:29:00 <wxwisiasdf> i mean nobody wants a 100KB /usr/bin/cat right
22:29:00 <wxwisiasdf> mrvn: my autoconf seems to do so apparently
22:30:00 <heat> it's because your libc is poorly designed
22:30:00 <klange> damn right -rwxrwxr-x 1 root root 5.8K May 17 07:30 /bin/cat
22:30:00 <heat> 100KB of static libc for cat isn't normal
22:30:00 <wxwisiasdf> yeah it isn't
22:30:00 <mrvn> wxwisiasdf: cat /usr/lib/x86_64-linux-gnu/libc.so
22:30:00 <wxwisiasdf> no it's a custom libc
22:31:00 <wxwisiasdf> the sys libc is fine
22:31:00 <mrvn> maybe it only has a static lib then
22:31:00 <wxwisiasdf> i am cross compiling for s390 on a x86 host, and i told libtools to make libc.la, still it links it as a static lib
22:31:00 <klange> okay, so you shouldn't need any las for libc
22:31:00 <mrvn> wxwisiasdf: what do you think the a in .la stands for?
22:32:00 <wxwisiasdf> oh
22:32:00 <klange> your cross-compiler should ✓ Just Do It™
22:32:00 <mrvn> just install /usr/lib/s390-linux-gnu/libc.so
22:32:00 <klange> mrvn: 'la' is a 'libtool archive' and it just says where to find the files, it does not imply linking them statically
22:32:00 <heat> autotools is ❌ Horrifying™
22:33:00 <wxwisiasdf> i should have used meson :/
22:33:00 <heat> do it
22:34:00 <klange> if you are doing something yourself, don't ever use autotools; autotools is something you throw at a project so you can dump it on linux distro maintainers
22:34:00 <heat> my experiences with meson have been very positive
22:34:00 <heat> except for that one time where I tried to generate a crt1.o and I couldn't because meson doesn't support making bare object files without horrible hacks
22:34:00 <clever> klange: what about cmake? the only time ive used that was to deal with windows+linux+mac builds
22:34:00 <wxwisiasdf> uhhhh
22:35:00 <heat> clever, cmake is like 4x better than autotools
22:35:00 <wxwisiasdf> my os is basically the kernel + libc + 2 unrelated libs + 8 random programs
22:35:00 <wxwisiasdf> so uh, will meson do it for that?
22:36:00 <heat> probably
22:36:00 <klange> you should probably just write makefiles
22:36:00 <kazinsal> ^^
22:36:00 <GreaseMonkey> cmake isn't magical when it comes to cross-platform stuff but it's not too shabby
22:36:00 <klange> meson, cmake, autotools... these are all makefile generators
22:36:00 <GreaseMonkey> cmake can generate ninja
22:36:00 <GreaseMonkey> iirc so can meson
22:36:00 <heat> if you want to shill for google use gn
22:36:00 <klange> and they all exist for solving the problem of having different build requirements for different _targets_
22:36:00 <wxwisiasdf> my os builds for both s390 and x86
22:36:00 <klange> you don't have different targets for your OS (target != architecture, in this case, only platform, and you ARE the platform)
22:36:00 <wxwisiasdf> oh
22:37:00 <wxwisiasdf> oh i see
22:37:00 <heat> i disagree
22:37:00 <heat> they're incredibly useful abstractions
22:38:00 <wxwisiasdf> i used to have julia as a build system that used wine for using a special gccmvs version that only ran on windows and could make 21-bit programs for s370
22:38:00 <heat> cheers that's horrific
22:38:00 <mrvn> I have julia as fractal generator
22:38:00 <wxwisiasdf> heat: oh yeah did i also mention i used JCL files
22:38:00 <mrvn> wxwisiasdf: long live 31bit with s390
22:38:00 <wxwisiasdf> yes
22:39:00 <wxwisiasdf> i just find myself "build-system-hopping" and i don't know which to choose
22:39:00 <klange> I have this https://github.com/klange/toaruos/blob/master/util/auto-dep.krk
22:39:00 <bslsk05> ​github.com: toaruos/auto-dep.krk at master · klange/toaruos · GitHub
22:39:00 <heat> i know the feeling
22:39:00 <heat> but autotools is definitely not the answer
22:39:00 <GreaseMonkey> from what i've heard, JCL exists just so that COBOL programmers can say "at least it's not JCL"
22:39:00 <mrvn> s390 really makes me want to define CHAR_BITS=10, sizeof(int) = 3;
22:42:00 <wxwisiasdf> GreaseMonkey: jcl exists so you can copy files using 10 lines of code
22:42:00 <klange> An OS is really the ideal thing for using (GNU) Make raw. You don't need to automagically locate libraries, you don't need to figure out how to build DLLs for Windows, you don't need to yank a dozen third-party libs off the interwebs... you maybe want automatic discovery of sources [$(wildcard) or $(shell find)]
22:42:00 <wxwisiasdf> fair
22:42:00 <klange> What sparked all this recent interest in s390? Isn't it a super dead platform from the 90s?
22:43:00 <wxwisiasdf> klange: oh not s390
22:43:00 <wxwisiasdf> i am coding specifically for z/arch
22:43:00 <wxwisiasdf> which is basically s/390 but 2022 edition
22:43:00 <heat> klange, wildcard is considered harmful
22:43:00 <wxwisiasdf> the interest mainly came because i don't really know i just wanted to explore something new
22:43:00 <GreaseMonkey> honestly i'd be tempted to use a Tcl script as a build system
22:44:00 <klange> heat: my build systems are all built on lots of $(wildcard) and $(patsubst)
22:44:00 <heat> makefiles don't give you nice things like: readable build system files without 10 macros on top, header deps, builds that rebuild if the build files change
22:44:00 <heat> out of tree builds
22:45:00 <heat> builds with multiple targets (compiling a program for both the host and the target OS)
22:45:00 <GreaseMonkey> "considered harmful" isn't considered enough, because if it were, people would consider it harmful
22:45:00 <heat> it is harmful, that's why new build systems don't have wildcards
22:45:00 <heat> or have it as a "EXTREMELY DEPRECATED DO NOT USE" option
22:45:00 <wxwisiasdf> so uh... do i makefile
22:46:00 <heat> you'll also want to avoid recursive make (which you don't need to do if you're using a build system on top, since that works things out for you)
22:46:00 <heat> makefiles are also slower than ninja
22:47:00 <wxwisiasdf> i guess i will stick for autotools for now
22:47:00 <klange> whatever you do, stop using autotools
22:47:00 <heat> oh jeez oh man not autotools
22:48:00 <klange> you can debate other shit for ages, but autotools is always the wrong answer
22:48:00 <GreaseMonkey> ninja tends to be quicker at working out what not to build, and also does a reasonable calculation of how many processes you can run
22:48:00 <heat> i know i just told you not to use makefiles but if the other option is autotools, go for makefiles
22:48:00 <klange> write out all of your build commands in a static shell script and you've got a better build process than autotools
22:48:00 <GreaseMonkey> but yeah, plain make will get you a decent amount of the way, even if you're going to use wildcard
22:48:00 <clever> i recently saw somebody basically abusing make as a shell script
22:48:00 <klange> hire an intern to manually build everything by hand, and you have a better build process than autotools
22:48:00 <clever> they had a single rule, with 3 commands, that assembled, linked, and objcopy'd
22:48:00 <GreaseMonkey> i think people have used make as a parallel init system
22:48:00 <clever> at that point, its not even make!
22:51:00 <heat> toybox's build system is just a makefile that calls into some shell scripts
22:51:00 <heat> it's horrifying but I think the point is that AOSP can build toybox using toybox
22:58:00 <geist> clever: you can abuse it for fancy shell scripts indeed
22:58:00 <geist> a make with just a bunch of phony rules to run some premade things. actually not half bad
22:58:00 <clever> geist: ive also abused make for rc.d init scripts before
22:58:00 <klange> obligatory `make run` target
22:58:00 <clever> #!/usr/bin/make
22:58:00 <kazinsal> I have definitely written makefiles that are more shell than make
22:58:00 <clever> and then ./foo start, winds up acting like `make start`
22:59:00 <kazinsal> eventually I realized that was a terrible idea and instead made the build system primarily a shell script that called makefiles based on what you shoved into the args
23:00:00 <geist> i generally have a token script called `doit` that just does whatever last thing i was working on
23:00:00 <geist> usually drives make, does whatever configuration i need for whatever
23:00:00 <geist> i just continually hack it so that i dont generally have to do more than run doit for every cycle
23:01:00 <heat> omg GNU make as a shebang is so fucking cursed
23:02:00 <clever> heat: :D
23:02:00 <heat> #!/bin/gcc -O2 -c on .c files?
23:02:00 <klange> Don't you need -f on that?
23:03:00 <klange> heat: can't have multiple arguments in a shebang, that's a no-no
23:03:00 <heat> as usual, afaik it depends on the POSIX system
23:03:00 <heat> because everything in posix is consistent
23:04:00 <GreaseMonkey> i used to make C files have a bit of shell at the start, and the second line was #if 0
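Something along these lines is presumably what GreaseMonkey means; run the file as `sh hello.cpp` and the shell treats the preprocessor lines as comments, builds the file, and execs the result, while the compiler skips the shell part entirely. This is a sketch, and the original trick may have differed in detail:

    #if 0
    # seen by the shell as comments up to here; the next line does the build
    c++ -O2 -o "${0%.cpp}" "$0" && exec "./${0%.cpp}" "$@"
    exit 1
    #endif
    #include <cstdio>

    int main() {
        std::puts("built and run from a single file");
        return 0;
    }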
23:05:00 <Griwes> <heat> auto lck = std::lock_guard(mtx);
23:05:00 <Griwes> no, better, auto _ = std::lock_guard(mtx);1
23:05:00 <klange> shebangs aren't posix
23:05:00 <Griwes> s/1/!/
23:05:00 <Griwes> it's great
23:05:00 <Griwes> come to the auto side, we have auto
23:06:00 <heat> #!/usr/bin/env -S gcc -O2 -c should work on freebsd/linux I think
23:06:00 <klange> just wrap it in a shell script that adds the other args
23:07:00 <heat> Griwes, what's the next way of creating a lock guard? std::make_lock_guard()?
23:08:00 <Griwes> wdym "next way"
23:08:00 <heat> well C++23 ought to add a new way to do things of course
23:08:00 <heat> it's the C++ way
23:08:00 <Griwes> lol
23:09:00 <Griwes> we are not *quite* that bad yet
23:09:00 <Griwes> it's only make_pair that has been touched every (I think) revision so far
23:10:00 <Griwes> but that's because make_pair is ridiculous
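For context on why make_pair keeps getting revisited: since C++17, class template argument deduction does what the make_* helpers were invented for, which is presumably Griwes's point. A small illustration:

    #include <string>
    #include <utility>

    // Pre-C++17 style: a helper function deduces the template arguments.
    auto p1 = std::make_pair(1, std::string("old way"));

    // With CTAD the class template deduces them directly.
    std::pair p2{2, std::string("new way")};

    // The "almost always auto" spelling of the same thing.
    auto p3 = std::pair(3, std::string("also fine"));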
23:13:00 <heat> something something break the ABI and make an incompatible c++ version that doesn't need all these hacks
23:43:00 <kingoffrance> i always use #!<space> because it is some apparently ancient allowed thing, live dangerously seeing if it will break