Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ (these can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev2&y=21&m=11&d=10

Wednesday, 10 November 2021

01:24:00 <klange> Spun up a little test environment for running ToaruOS off an ext2 partition on a disk, and... okay yeah there's definitely some subtle bugs in this ext2 implementation.
01:24:00 <klange> For one it's making all new files owned by root. Oops.
01:24:00 <klange> But with a little initrd to load the relevant drivers and mount it, it does boot to a GUI at least.
02:22:00 <klange> I installed doom from the package manager, rebooted, and it's still there and working, so at least the basics seem to be functioning.
02:22:00 <klange> And fsck even says my partition is clean, that's surprising...
02:25:00 <Ameisen> hmm, looks like trunk gcc, trunk clang, and msvc.latest are all honoring [[likely]]/[[unlikely]]
02:26:00 <Ameisen> though MSVC has difficulty with an else branch being specified [[likely]] without a corresponding [[unlikely]] on the if
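A minimal C++20 sketch of the pattern Ameisen mentions (the function and strings are invented for illustration): the if branch is unannotated while the else branch carries [[likely]], which trunk gcc and clang reportedly honor but MSVC has trouble with.

    #include <cstdio>

    // Hypothetical hot/cold split: only the else branch is annotated.
    int process(int err)
    {
        if (err) {
            std::puts("rare error path");
            return -1;
        } else [[likely]] {
            std::puts("common fast path");
            return 0;
        }
    }

    int main()
    {
        return process(0) + process(1) + 1;   // exercise both branches, exit 0
    }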
02:29:00 <klange> Ah, dang, I made the mistake of sticking the local package index in /var so the package manager doesn't know Doom is installed after a reboot, but it's there and the desktop launcher exists...
02:37:00 <klange> Oh I didn't -f so fsck lied to me, I see i_blocks inconsistencies...
02:50:00 <vin> Reposting my question: I am a little confused, in ext4 if two threads are writing to the same inode/file at different offsets in parallel, would a lock be acquired on the file making the writes sequential?
02:50:00 <Mutabah> Depends on what requirements you want to have
02:51:00 <Mutabah> You don't _need_ a lock (the two writes can just potentially interleave)
02:53:00 <vin> Mutabah: but would the fs implicitly apply a lock to the inode when a process is accessing it?
02:53:00 <Mutabah> Depends on your design
02:53:00 <Mutabah> You would definitely want to lock modification to the inode itself
02:54:00 <Mutabah> (to avoid tearing when updating e.g. the size)
02:54:00 <vin> even if I don't use any locks in my program, will the file system convert all accesses to the file (at different offsets) into sequential ones?
02:55:00 <vin> yes that makes sense, when you are modifying the metadata you should have a lock
02:56:00 <Mutabah> The FS could easily just do non-locked accesses for file data, and leave it up to the userland to not do overlapping writes
02:56:00 <jjuran> There's a difference between writes to different disk blocks and non-overlapping writes to the same block
02:56:00 <Mutabah> Or, your FS could require the file to be opened exclusively to be able to write to it
02:57:00 <vin> do you know what ext4 does, Mutabah?
02:57:00 <Mutabah> ext4 is just a filesystem format, it doesn't (afaik?) specify the semantics of accessing files
02:58:00 <Mutabah> That's the domain of the VFS layer
02:58:00 <vin> So this should be a POSIX standard?
02:58:00 <Mutabah> If you want to know what linux does, I'd check the manpages
02:59:00 <vin> Okay, I will read the VFS docs. Not sure which manpage in particular
03:00:00 <Mutabah> open/read/write/...
03:05:00 <geist> i think you'll find there's not a lot of hard policy on exactly what happens to overlapping writes
03:05:00 <geist> for non overlapping writes it doesn't matter which order they appear in
03:06:00 <geist> or at least by the time the write() syscall ends the data should appear to any subsequent read (or mmap), but within that syscall i suspect the precise order it appears in is undefined, as is the granularity (1 byte? 1 page? 1 disk block? 1 fs block? etc)
03:07:00 <geist> it is assumed that if you do two writes in sequence, data A appears before data B, but if they're simultaneous, no guarantees (or at least if there are, they're OS specific)
03:08:00 <geist> the hard one is of course O_APPEND
03:08:00 <geist> that one gets tricky: if you have two simultaneous threads appending to the same file, what is the granularity of their writes
03:08:00 <geist> i believe linux makes some guarantees, up to a point
03:08:00 <geist> but the up to a point is sufficiently large that it generally isn't a problem
03:13:00 <jjuran> I would expect simultaneous O_APPEND writes not to clobber each other
03:13:00 <jjuran> which means non-overlapping writes to the same disk block have to not clobber each other
03:14:00 <jjuran> So, writes to a particular disk block should be serialized.
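As a concrete reference point for the non-overlapping case being discussed, here is a small sketch assuming POSIX pwrite(), which takes an explicit offset and does not touch the shared file position; whether the kernel serializes the two writes internally is exactly the implementation detail in question, but userspace needs no lock of its own here. The file name and sizes are arbitrary.

    #include <fcntl.h>
    #include <unistd.h>
    #include <thread>
    #include <vector>

    int main()
    {
        int fd = open("testfile", O_CREAT | O_RDWR, 0644);
        if (fd < 0) return 1;

        // Each thread writes its own disjoint 4 KiB region of the same fd.
        auto writer = [fd](off_t off) {
            std::vector<char> buf(4096, 'x');
            pwrite(fd, buf.data(), buf.size(), off);   // explicit offset, error handling omitted
        };

        std::thread a(writer, 0);        // bytes [0, 4096)
        std::thread b(writer, 4096);     // bytes [4096, 8192)
        a.join();
        b.join();
        return close(fd);
    }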
03:14:00 <zid> ext4 has all sorts of nice mount options like async sync noatime blah blah
03:14:00 <zid> to deal with what happens in some of these edge cases
03:15:00 <zid> you just.. pick a behavior when you mount it and the vfs/fs code sorts it out
03:19:00 <vin> geist: but are there guarantees for non overlapping writes to a single file? (to different blocks) -- Can I assume the filesystem won't act as a fence and flush one write at a time? It doesn't make sense why it should do so, especially with ssds which support multiple channels
03:19:00 <geist> i think the general model re: simultaneous O_APPEND is atomically bump the file pointer by the size of the write, write to the old location
03:19:00 <geist> vin: yeah no flushing unless you're operating with sync or whatnot
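A rough sketch of the O_APPEND model geist describes above, on the kernel side; the vnode type and its fields are invented for illustration. Each appender atomically reserves its own range by advancing the size first, then writes into the old range, so concurrent appends land at distinct offsets.

    #include <atomic>
    #include <cstddef>
    #include <cstdint>

    struct vnode {
        std::atomic<uint64_t> size{0};   // current end-of-file
        // ... block map, page cache, etc.
    };

    // Returns the offset this append should write to; the reservation itself
    // is atomic, so two simultaneous appenders cannot clobber each other.
    uint64_t append_reserve(vnode &vn, size_t len)
    {
        return vn.size.fetch_add(len);   // old EOF is where this write goes
    }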
03:19:00 <zid> vin: why would two writes to two regions interfere anyway?
03:19:00 <zid> it doesn't have to guarantee anything there anyway
03:20:00 <vin> because they both are operating on the same inode? Yes I expect them to not interfere
03:21:00 <zid> All I can think of is both causing an append, might interfere, updating the file size in the wrong order or such
03:21:00 <geist> yah the key is to get the model of what user space is supposed to see with a series of FS operations
03:21:00 <geist> and then work backwards from there
03:21:00 <geist> the FS implementation has to guarantee at least that, but it may be more strict because of internal implementation details
03:21:00 <zid> or way less strict because the vfs never orders it to do anything confusing, depending on design
03:21:00 <vin> So none of the file system metadata structures are concurrent data structures then, right?
03:22:00 <zid> and userspace rules (see: posix) might say that you can't open the same file twice to begin with etc
03:22:00 <vin> yes zid, append is a good example of requests being serialized
03:22:00 <geist> also there's a model as to what user space sees in a file, and what gets to the disk
03:22:00 <geist> the two can be disconnected substantially, except where fsync/sync come into play
03:22:00 <zid> even posix is crap wrt specifying this stuff
03:22:00 <vin> zid: you don't need to open it twice, open in the parent and pass the fd to children
03:22:00 <geist> so part of the fun is to allow things to be somewhat lazily done physically
03:23:00 <zid> vin: I'd count that as having two open file descriptors for the same file, and thus opened twice
03:23:00 <vin> yes geist
03:23:00 <zid> posix being crappy is why I use sqlite for a lot of stuff
03:23:00 <geist> so in general the user space model is that file ops are atomic at least with regards to completing the op
03:24:00 <geist> ie, at the end of a write() it has happened to all other observers in the system, etc
03:24:00 <geist> same with truncate, unlink, etc
03:24:00 <zid> It just turns into races, rather than inconsistencies
03:24:00 <geist> that sort of thing. whether or not it happened on the disk is not really specced *except* where syncs and unmounts and whatnot come into play
03:24:00 <vin> I am tempted to write a small benchmark that compares parallel writes to a single file at different offsets (bigger than a block) vs a single thread doing all the writes.
03:25:00 <vin> This should be O_DIRECT though, to avoid paging
03:25:00 <zid> it shouldn't really be measurable unless you're cpu bound
03:25:00 <zid> I can issue 10s of gigabytes of writes a second in a single thread
03:25:00 <vin> without paging?
03:26:00 <zid> the queue will just fill and it'll start to block
03:26:00 <vin> The max bandwidth I have ever got from an ssd is 6 GB/s
03:26:00 <vin> single ssd
03:27:00 <zid> and fwiw, a little operation queue with thread safe append/pop would probably cover 99% of all the edge cases
03:28:00 <vin> okay so to be clear, you expect parallel writes to a file to be faster than a single writer? zid
03:28:00 <zid> not in the least
03:28:00 <zid> I expect the device to be completely bottlenecked
03:28:00 <zid> by its write speed, and my ability to supply it writes will massively outpace that even on a single core
03:28:00 <vin> why though? modern ssds have thousands of large queues you can submit parallel writes to
03:28:00 <zid> unless you've got like, an optane and one of those 24 core 1GHz webserver xeons or something
03:29:00 <zid> because then you're dealing with a lot of stacked syscall latencies
03:29:00 <zid> on a shit cpu
03:29:00 <vin> I do have an optane ssd with xeon to try this out on
03:30:00 <zid> a nice shitty webserver xeon?
03:31:00 <vin> No but I don't get why a bad cpu is needed to extrapolate the write performance -- if I bypass paging I will be IO bound anyway
03:32:00 <zid> That's precisely my point, you will be io bound
03:32:00 <zid> so who gives a fuck about threads
03:32:00 <zid> threads only matter once you're cpu bound
03:34:00 <vin> but the question is if a single thread is enough to saturate storage bandwidth
03:34:00 <vin> You are saying it should be
03:35:00 <zid> It's absolutely horrifically plenty on any setup you're likely to find in the wild
03:35:00 <zid> The 6GB/s you quoted is 10% of my memory bandwidth
03:35:00 <zid> for example
03:36:00 <zid> where it doesn't work is when you're going for shit loads of iops
03:36:00 <zid> because the overheads eat you
03:37:00 <vin> Okay so let's say we have paging, so we are now bound by memory bandwidth -- which for sure can't be saturated by a single thread.
03:37:00 <zid> except we're at 10%, not saturated
03:38:00 <vin> Sure. So having multiple threads writing to a file has an advantage now
03:38:00 <zid> no, we need 10x the write bandwidth
03:39:00 <zid> I can completely saturate my ssd multiple times over from a single thread.
03:39:00 <zid> Unless it's an optane and we're talking iops, I don't need any threading at all.
03:40:00 <vin> So pages in memory need to be invalidated to storage which can be the bottleneck with multiple threads, I get that. So the only time multiple threads will help is when your file fits in memory.
03:40:00 <zid> ???
03:40:00 <zid> I don't understand any of what you just said
03:41:00 <zid> I do sys_write, it tells the device either to dma from my memory, or I memcpy to its internal buffer over the pci-e link at pci-e link speeds. I can memcpy from a single thread /faster than pci-e/
03:41:00 <vin> If you're writing to a file and run out of memory to do paging then you will need to flush old pages to the disk, at which point the program becomes IO bound again.
03:44:00 <vin> But if your file/working set fits in dram then multiple threads can take advantage of mem bandwidth (don't need to wait for pages to be swapped)
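A rough outline of the benchmark vin is proposing, assuming Linux O_DIRECT semantics (buffer, offset, and length all aligned; 4 KiB alignment used here); the thread count is the variable under test, and everything else (file name, sizes) is arbitrary.

    // Build with g++ on Linux; O_DIRECT is a GNU extension. Error handling omitted.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdlib>
    #include <thread>
    #include <vector>

    constexpr size_t kChunk  = 1 << 20;    // 1 MiB per write, O_DIRECT-aligned
    constexpr size_t kChunks = 1024;       // 1 GiB total

    int main()
    {
        int fd = open("bench.dat", O_CREAT | O_WRONLY | O_DIRECT, 0644);
        if (fd < 0) return 1;

        unsigned nthreads = 4;             // compare wall-clock time for 1 vs N threads
        std::vector<std::thread> ts;
        for (unsigned t = 0; t < nthreads; t++) {
            ts.emplace_back([=] {
                void *buf;
                if (posix_memalign(&buf, 4096, kChunk)) return;   // O_DIRECT needs aligned buffers
                for (size_t i = t; i < kChunks; i += nthreads)
                    pwrite(fd, buf, kChunk, (off_t)(i * kChunk));
                free(buf);
            });
        }
        for (auto &th : ts) th.join();
        return close(fd);
    }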
05:16:00 <zid> odd, 1.1.1.1 stopped working for dns
05:16:00 <junon> I've also had issues with 1.1.1.1 in the last 48 hours
05:16:00 <junon> that's weird
05:16:00 <zid> damn you cloudflare
05:16:00 <zid> I swapped to 8.8.8.8, I think that's google?
05:16:00 <junon> yes it's google
05:17:00 <junon> google's secondary is 8.8.4.4 btw
05:18:00 <zid> discord seems buggered too
05:18:00 <zid> wonder if some datacenter exploded
05:18:00 <junon> zid are you in germany by any chance? looks like berlin is re-routed right now for cloudflare, idk if that's often or not
05:18:00 <zid> nope
05:18:00 <junon> In fact a number of german datacenters are re-routed at the moment...
05:20:00 <zid> and it's back
06:40:00 <geist> yeah i used to try to mix 8.8.8.8 and 1.1.1.1 but cloudflare has gone down more than once on me
06:40:00 <geist> so finally removed it
06:41:00 <graphitemaster> clearly what we need is a single ip address that gives us a list of dns ip addresses we can cycle through, so then you only need to put one dns address in there /s
06:44:00 <moon-child> dns server server?
06:44:00 <geist> well you *can* look up the google one with dns.google.com
06:45:00 <geist> gives you all the aliases
06:45:00 <geist> oddly, dns.cloudflare.com gives you something other than 1.1.1.1
06:46:00 <geist> btw if you're fiddling with your dns stuff, i encourage you to look into DNS over SSL and DNS over HTTPS
06:46:00 <geist> both google and cloudflare support it, and it's nice to cloak your dns traffic, especially on a laptop in a public place
06:56:00 <zid> idk how easy that is on windows
06:56:00 <geist> yah it's not even easy on linux (mint linux at least)
06:57:00 <zid> it's probably easy on my gentoo
06:57:00 <zid> also, my google account is bugged
06:57:00 <zid> I can't load the home page, everything else works, home page gives a 500 error
06:57:00 <geist> what i have is a local dns resolver on my firewall that handles local dns traffic but then talks to 8.8.8.8@853
06:57:00 <zid> search works, account page, gmail, etc all work, google.com is 500
06:57:00 <geist> hmm, not here, so it's at least not globally down
06:58:00 <zid> It's my account
06:58:00 <zid> I asked a friend at google, he says he doesn't know how to open a ticket for it because he's on mail
06:58:00 <zid> but the only open bug that looked similar that he could find was toggling some account setting and it wasn't that :(
07:11:00 <kazinsal> I think I mostly mix 8.8.8.8 and 9.9.9.9
07:12:00 <kazinsal> but I also have a local domain controller with a DNS server because I'm a horrendous nerd
07:37:00 <zid> I'd do it if I was capable of using my router
07:37:00 <zid> but there's a lovely bug in the ISP modem that stops me
08:28:00 <ZetItUp> man i just found a bug in my mm, if i free some space in between some allocations it still returns that address on the next kmalloc even if the requested size is bigger than the hole :P time to debug wtf im doing wrong :P
08:32:00 <zid> oopsie
08:32:00 <zid> what data structure are you tracking with?
08:33:00 <ZetItUp> i kinda followed james molloy's tutorial where you add header/footer of memory spaces
08:33:00 <zid> linked list then?
08:34:00 <ZetItUp> yeah, so i guess i forgot to modify the list :P
08:34:00 <ZetItUp> on free i mean
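A hedged sketch of the check that kind of free-list allocator needs on its allocation path (the structures here are simplified stand-ins, not the tutorial's actual header/footer layout): a hole is only usable if its recorded size covers the request, and a large-enough hole should be split so the tail goes back on the list.

    #include <cstddef>
    #include <cstdint>

    struct block_header {
        size_t        size;   // usable bytes in this block
        bool          free;
        block_header *next;   // singly linked list of all blocks
    };

    static block_header *heap_list;   // assumed to be set up elsewhere

    void *kmalloc(size_t size)
    {
        for (block_header *b = heap_list; b; b = b->next) {
            if (!b->free || b->size < size)   // the hole must actually be big enough
                continue;
            if (b->size >= size + sizeof(block_header) + 16) {
                // split: carve a new free block out of the tail of this hole
                auto *rest = (block_header *)((uint8_t *)(b + 1) + size);
                rest->size = b->size - size - sizeof(block_header);
                rest->free = true;
                rest->next = b->next;
                b->size = size;
                b->next = rest;
            }
            b->free = false;
            return b + 1;                     // payload starts right after the header
        }
        return nullptr;                       // nothing fits; caller would grow the heap
    }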
08:46:00 <graphitemaster> <moon-child> dns server server?
08:47:00 <graphitemaster> moon-child, yes, presumably we'll have a few of those too
08:47:00 <graphitemaster> So then we'll need a dns server server server to consolidate them
08:52:00 <moon-child> https://www.intel.com/content/www/us/en/security-center/advisory/intel-sa-00528.html I wonder if you could use this to dump/change ucode?
08:52:00 <bslsk05> ​www.intel.com: INTEL-SA-00528
08:53:00 <moon-child> oh, twitter says no
09:23:00 <Affliction> Description: Hardware allows activation of test or debug logic at runtime for some Intel(R) processors which may allow an unauthenticated user to potentially enable escalation of privilege via physical access.
09:23:00 <Affliction> When did having physical access stop implying "attackers win"?
09:24:00 <moon-child> lol
09:30:00 <klange> complete loss of ipv6 on my DO droplet ;-;
09:30:00 <klange> gateway responds, but nothing's getting past it
09:31:00 <moon-child> Affliction: honestly feels like it's going the opposite way. Post-meltdown et al, people are finally realising that untrusted code on shared hardware will never work, even with a sandbox
09:31:00 <Affliction> hm
09:32:00 <Affliction> Nah, just throw another layer of virtualisation at it, it'll be fiiine
09:34:00 <jjuran> I've discovered a means of performing a denial of service attack via physical access. It works with any processor.
09:34:00 <klange> Does it still work afterwards? :P
09:34:00 <moon-child> ;o
09:34:00 <moon-child> jjuran: https://xkcd.com/1217/
09:34:00 <bslsk05> ​xkcd - Cells
09:34:00 <Affliction> o noes! have a CVE#
09:35:00 <jjuran> klange: In some variations, yes. :-)
09:36:00 <Affliction> Depends on if we're talking about ripping the cables out, or smashing it with a hammer
09:36:00 <klange> I was just wondering if your approach involved a hammer.
09:36:00 <Affliction> Or an etherkiller
09:36:00 <jjuran> No, not a hammer.
09:37:00 <jjuran> Power drill.
09:37:00 <klange> Might not kill the CPU. Might not even make it past the NIC.
09:37:00 <Affliction> service will still be denied!
09:37:00 <jjuran> (Or just pull out the power cable / close the laptop lid.)
09:37:00 <klange> Nothing quite like a good kick with some line voltage~
09:38:00 <klange> > close the laptop lid
09:38:00 <klange> jokes on you I don't have the ACPI support to respond to that
09:38:00 <Affliction> Is that an example of the code working on the developers' system, so they put the developers' system in prod?
09:38:00 <junon> I'm not looking forward to implementing ACPI
09:38:00 <junon> I hear it's hell
09:39:00 <jjuran> Hell is other people's software.
09:39:00 <Affliction> It's... certainly a thing
09:40:00 <klange> ACPI is definitely a lot of other peoples' software.
09:41:00 <geist> in a mixed C/C++ environment, .h vs .hpp (or .hh depending on your flavor). discuss.
09:41:00 <Affliction> void shutdown() { set_fan_speed(0); avx512_powervirus(); } // ACPI is hard, try to shut down by PROCHOT trip
09:42:00 <geist> ie in a sea of C headers, if you have a header that's intended to be used for C++ stuff, does it make sense to name it that way, or simply #ifdef __cplusplus the body of it?
09:42:00 <klange> Would probably work on my laptop, but I think it would beep a few times first.
09:43:00 <Affliction> I know in theory modern chips are supposed to throttle
09:43:00 <j`ey> geist: I think just .h
09:43:00 <Affliction> That didn't happen on my 3950X, at least with the old firmware, haven't tested since.
09:43:00 <j`ey> geist: no real reason though
09:44:00 <Affliction> ran straight up to 105C then off.
09:44:00 <klange> obviously if I'm writing c++ headers I've lost my mind and have decided to write a C++ standard library
09:44:00 <geist> j`ey: noted. that's what most things I've used do. it always seemed nice to have a separate extension
09:44:00 <klange> and thus my C++ headers should have the suffix ""
09:44:00 <geist> ugh... that's so terrible
09:44:00 <geist> not your fault, the C++ people's fault
09:45:00 <geist> though i guess with a // vim: tag it's at least usable
09:45:00 <klange> It _was_ a cheeky way to clean up standard #include's.
09:47:00 <geist> trying to think of the nicest way to declare C++ wrappers around C things
09:47:00 <geist> probably simplest and most useful is to declare the C things in a header, followed by the C++ wrappers in a C++ specific section just afterwards
09:48:00 <klange> I slapped the #ifdef __cplusplus crap in a shared header under a pair of _Begin_C_Header and _End_C_Header macros.
09:48:00 <geist> ie, struct mutex {} and then a class Mutex { struct mutex; } right after it in a #ifdef __cplusplus
09:48:00 <geist> yah
09:49:00 <geist> kk. yeah. probably the simplest. that way you can pick your poison
09:49:00 <geist> C++ code can use the fancier versions, but the headers aren't particularly weird for them
09:49:00 <klange> I haven't actively written C++ since my days in robotics, and I even just removed support for C++ from my build system...
09:50:00 <klange> (Not that meaningful of a change, just stopped sticking libstdcxx on the base image and removed one 'hello world')
09:50:00 <geist> i've been starting to write more and more subsystems in LK in C++. guess a few years of doing it in zircon is bleeding through
09:50:00 <geist> not that i particularly want super fancy bits, but i have to admit things like RAII lock guards and whatnot are darn handy
09:51:00 <geist> so makes sense to at least provide simple wrappers and lock guards and whatnot for folks that want to use them around the standard primitives
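Roughly the layout geist is describing, as one shared header (all names are illustrative, not LK's real API): the C declarations stay plain, and a thin C++ wrapper plus an RAII lock guard sit in an #ifdef __cplusplus section right after them, so C++ callers can pick the fancier versions without a separate .hpp.

    // mutex.h -- illustrative sketch only
    #pragma once

    #ifdef __cplusplus
    extern "C" {
    #endif

    struct mutex { volatile int count; };
    void mutex_init(struct mutex *m);
    void mutex_acquire(struct mutex *m);
    void mutex_release(struct mutex *m);

    #ifdef __cplusplus
    }  // extern "C"

    // C++-only conveniences, invisible to C translation units.
    class Mutex {
    public:
        Mutex() { mutex_init(&m_); }
        void Acquire() { mutex_acquire(&m_); }
        void Release() { mutex_release(&m_); }
    private:
        mutex m_;
    };

    class AutoLock {          // RAII lock guard
    public:
        explicit AutoLock(Mutex &m) : m_(m) { m_.Acquire(); }
        ~AutoLock() { m_.Release(); }
    private:
        Mutex &m_;
    };
    #endif // __cplusplus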
09:51:00 <geist> just spent a few hours converting the PCI bus driver to C++. since it was basically already just object oriented C anyway
09:52:00 <zid> so now you have a vtable and it links slower too, yay? :p
09:53:00 <geist> well, no. it was already a vtable
09:53:00 <geist> oh also check this out: `time make -j` `real 0m0.208s`
09:54:00 <geist> oh wait, that was with ccache. without `real 0m0.810s`
09:55:00 <geist> but noted. my experience is C++ doesn't start really slowing down until you start drinking from the template fountain
10:13:00 <junon> compilation? or runtime?
10:13:00 <j`ey> compilation
10:13:00 <junon> right
10:14:00 <klange> I have some C stuff that takes several seconds...
10:15:00 <klange> Like my editor. Or when I throw all of the source files for my interpreter at one gcc...
10:25:00 <geist> Yah i gotta respect projects that do the whole ‘compile in one command’ thing
10:26:00 <geist> Honestly surprised more stuff doesn’t just do that
10:28:00 <klange> My editor only works that way because I keep it as one file - it's its own stress test.
10:28:00 <junon> I wrote my own build system to do that
10:29:00 <moon-child> geist: I still want to write a c compiler which does its own caching and parallelization
10:29:00 <moon-child> such that 'compile in one command' is faster than anything else you could do
10:30:00 <klange> I want to write a C compiler at all
10:30:00 <geist> Seems that llvm could pull it off pretty easily if the clang front end just stamps out N copies of the compiler internally
10:30:00 <geist> And splits it across files you pass it on the command line
10:31:00 <geist> I had heard that clang driver somewhere along the line stopped forking itself to run internal steps, because it can instantiate the compiler bits, run it, then tear that down and instantiate the linker, etc
10:32:00 <moon-child> ideally you would do those in parallel, pipelined
10:32:00 <klange> I should try to write a C compiler following the same model as Kuroko's compiler. Just straight up mash out machine code while you parse...
10:32:00 <moon-child> so semantic analyser handles one function, then sends it to the code generator which is running on another thread. So you can generate code for the first function at the same time as you semantically analyse the second function
10:33:00 <moon-child> obviously obviates ipo. And depends on c/c++'s in-order semantics
10:33:00 <geist> I don’t know if that’s feasible on modern compilers nowadays. I think they probably need to look at too much global state to do that
10:33:00 <moon-child> klange: that's what tcc does!
10:33:00 <geist> Years ago I remember you could drive GCC by typing into stdin and watch it generate asm as you wrote C
10:33:00 <geist> But somewhere along the way even basic -O0 needs to see too much before it outputs asm
13:26:00 <ZetItUp> found some old osdev post which started with this sentence: I'm developing an operating system and instead of programming the kernel, I'm developing the kernel.
15:44:00 <Bitweasil> When you write the kernel with a laser pointer on film, you indeed have to develop it to see what you've done!
16:06:00 <kingoffrance> i sent my kernel to therapy, it wont be out for a long time lol
16:06:00 <kingoffrance> its developing, in counseling, under lockdown lol
16:07:00 <kingoffrance> we write every year or so
16:07:00 <kingoffrance> our relationship is "developing"
16:13:00 <kingoffrance> well, maybe they meant "design"
16:13:00 <kingoffrance> then, that post sort of makes sense...
17:36:00 <sikkiladho> Hi. I'm trying to create a chain between bare metal binaries on raspberry pi 4 to learn about the bare metal world. I have two binaries.
17:36:00 <sikkiladho> el2-kernel.img - it prints Hello to UART.
17:36:00 <sikkiladho> kernel=el2-kernel.img
17:36:00 <sikkiladho> initramfs el1-kernel.img 0x400000
17:36:00 <sikkiladho> I use these configs to load both binaries at different addresses. In el2-kernel.img, I simply jump to 0x400000 (using eret and the relevant values in spsr_el2 and elr_el2 (the address)). I have been successful in doing so. Now I want to jump from el2-kernel to the standard linux kernel. I have been trying to do it for a few weeks but the standard kernel
17:36:00 <sikkiladho> won't run. I have also kept the address of the dtb in ram in x0.
17:36:00 <sikkiladho> Here's my code: https://github.com/SikkiLadho/Leo
17:36:00 <bslsk05> ​SikkiLadho/Leo - Leo Hypervisor. Type 1 hypervisor on Raspberry Pi 4 machine. (0 forks/0 stargazers)
17:48:00 <geist> SikkiLadho: oooh nice
17:48:00 <geist> clever: !
17:48:00 * geist points clever at SikkiLadho
18:05:00 <clever> geist: using the initrd like that was an idea i gave him a few days ago!
18:07:00 <sikkiladho> yeah, thank you for that clever. I can print Hello and World by using two separate binaries loaded at different addresses (and at different exception levels). Why can't I do the same with the kernel?
18:08:00 <clever> SikkiLadho: not sure, simplest way to get an answer is to rig up jtag and see what happens
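For reference, the shape of the EL2-to-EL1 jump being described, as a hedged sketch in GCC-style inline asm rather than SikkiLadho's actual code: elr_el2 holds the entry point, spsr_el2 selects EL1h with interrupts masked, and x0 carries the DTB address per the arm64 Linux boot protocol. The protocol additionally expects HCR_EL2.RW set for AArch64 EL1, MMU and D-cache off at the entry point, and x1-x3 zeroed; none of that setup is shown here.

    #include <cstdint>

    // Sketch only: never returns. 0x3c5 = SPSR_EL2 value for EL1h with DAIF masked.
    static void jump_to_el1(uint64_t entry, uint64_t dtb)
    {
        asm volatile(
            "msr elr_el2, %[entry]\n"
            "mov x9, #0x3c5\n"
            "msr spsr_el2, x9\n"
            "mov x0, %[dtb]\n"      // x0 = physical address of the DTB
            "mov x1, xzr\n"         // x1-x3 must be zero for the Linux kernel
            "mov x2, xzr\n"
            "mov x3, xzr\n"
            "eret\n"
            :
            : [entry] "r"(entry), [dtb] "r"(dtb)
            : "x0", "x1", "x2", "x3", "x9", "memory");
    }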
18:11:00 <jjuran> geist: .h for headers usable from C, .hh for headers only usable from C++.
18:14:00 <gog> .hhhhhhhhhhhhhh for headers that are so bad they give you an asthma attack
18:14:00 * gog stares at every header in glibc
18:19:00 <clever> gog: haskell has an accursedUnutterablePerformIO for when you really want to give people an asthma attack! :P
18:20:00 <clever> https://hackage.haskell.org/package/bytestring-0.10.8.1/docs/src/Data-ByteString-Internal.html#accursedUnutterablePerformIO
18:20:00 <bslsk05> ​hackage.haskell.org: Data/ByteString/Internal.hs
18:21:00 <clever> > It lulls you into thinking it is reasonable, but when you are not looking it stabs you in the back and aliases all of your mutable buffers.
18:21:00 <clever> lol
18:22:00 <gog> "witness the trail of destruction" lol
18:23:00 <clever> gog: https://www.reddit.com/r/haskell/comments/6ygxrv/accursedunutterableperformio/dmnftn6/
18:23:00 <bslsk05> ​www.reddit.com: accursedUnutterablePerformIO : haskell
18:23:00 <clever> basically, it's a way of doing something with side-effects, in a pure expression that shouldn't have side-effects
18:24:00 <clever> and in one example, the compiler trusts you a little too much, and assumes `accursedUnutterablePerformIO mallocWord32` will always return the same thing
18:24:00 <clever> so every malloc returns the same addr
18:24:00 <clever> because you disabled every safety in the language
18:26:00 <gog> i should really learn more about functional programming
18:26:00 <gog> because i cannot grok this
18:28:00 <clever> gog: have you heard of SSA?
18:28:00 <clever> https://en.wikipedia.org/wiki/Static_single_assignment_form
18:28:00 <bslsk05> ​en.wikipedia.org: Static single assignment form - Wikipedia
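A tiny illustration of what SSA means, since that is the link being drawn to functional programming: the same straight-line code before and after renaming so every variable is assigned exactly once.

    int f(int a)
    {
        int x = a + 1;
        x = x * 2;          // plain form: x is reassigned
        return x;
    }

    // SSA form (conceptual):
    //   x1 = a + 1
    //   x2 = x1 * 2
    //   return x2
    // Where control flow merges, SSA adds a phi node, e.g. x3 = phi(x1, x2).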
19:36:00 <vin> Does CoW have poor bandwidth utilization? if so I don't see why?
20:22:00 <junon> lol j`ey I should have just asked in here
20:22:00 <j`ey> :p
20:23:00 <junon> I knew I recognized the nick
20:23:00 <geist> vin: i wouldn't say that no
20:23:00 <geist> or at least it's not a concrete enough problem statement to say yes or no
20:24:00 <geist> if you're trying to benchmark some specific scenario i'm sure some situations may be slower than not cow, but depends on what it is
20:25:00 <sortie> Really depends on how much data you attach to each cow and how long it takes to move the cow from point A to point B
20:25:00 <Bitweasil> If you get a good stampede going... bandwidth improves dramatically!
20:26:00 <j`ey> geist: what does zircon have, re filesystems?
20:26:00 <geist> sortie: haha nice. that was a real missed opportunity there
20:27:00 <geist> j`ey: we have a few custom ones and a new one in development and simple support for fat and ext4
20:27:00 <j`ey> neat
20:31:00 <geist> https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/src/storage/fxfs/ is the new one in development. looks pretty neat
20:31:00 <bslsk05> ​fuchsia.googlesource.com: src/storage/fxfs - fuchsia - Git at Google
20:33:00 <j`ey> written in rust, cool
22:56:00 <Ameisen> https://umu.diva-portal.org/smash/get/diva2:1566002/FULLTEXT01.pdf
22:56:00 <Ameisen> So, I find those results interesting.
22:56:00 <Ameisen> Reduced latency is something that, as I recall, MuQSS was meant for.
22:57:00 <Ameisen> and better interactivity - though it fails that test as well.
23:17:00 <Ameisen> though MuQSS development has been halted, as he doesn't want to keep forward-porting it
23:17:00 <Ameisen> so that leaves... CFS and PDS.