Search logs:

channel logs for 2004 - 2010 are archived at ·· can't be searched

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present

Thursday, 13 January 2022

01:24:00 <geist> heat: re:thinlto yeah it's not too bad
01:25:00 <geist> a few times slower than a non thinlto, but nothing like a full LTO. though of course the slowness of LTO i think goes up perhaps nonlinearly with the complexity of the code base
01:26:00 <heat> I let my laptop compile llvm with ThinLTO and it took like 140 minutes
01:27:00 <heat> so like +100 minutes which is not too bad for an older laptop
01:27:00 <heat> taking into account it just freaking LTO'd
01:27:00 <heat> I imagine using a big workstation-ish CPU you could do it in what, 20-30 minutes? amazing
01:41:00 <geist> good question. i'll give it a try later today to see
01:42:00 <geist> how did you configure it for thinlto vs lto?
01:42:00 <geist> oooh i'll build it on the arm workstation
01:43:00 <heat> it's the -DLLVM_ENABLE_LTO= switch on the llvm cmake
01:43:00 <geist> =thinlto?
01:43:00 <heat> =Thin, =On (Full), =Off
01:43:00 <geist> kk
01:43:00 <heat> then you also have -DLLVM_PARALLEL_LINK_JOBS=N if you want to limit parallel link jobs
01:43:00 <geist> good idea
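For reference, the configure step being described might look like this (a sketch only; the source path, generator, and job count are assumptions, but LLVM_ENABLE_LTO and LLVM_PARALLEL_LINK_JOBS are the cmake options named above):

```shell
# sketch: configuring an LLVM build with ThinLTO
# LLVM_ENABLE_LTO accepts Off / On (full) / Thin / Full
cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_LTO=Thin \
    -DLLVM_PARALLEL_LINK_JOBS=4   # cap parallel link jobs to limit peak RAM
```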
01:43:00 <geist> i only have 128GB on this machine
01:44:00 <heat> hahahaha "only"
01:44:00 * geist flexes
01:44:00 <geist> well, 256 cores and 128GB. so its a bad ratio
01:44:00 <gog> that's like my whole SSD in this thing
01:44:00 <klange> I think 64GB is the chipset maximum for what I'm running in this box...
01:45:00 <klange> yarp
01:45:00 <bslsk05> ​ Intel Core i5-6600K Processor (6M Cache, up to 3.90 GHz) Product Specifications
01:46:00 <zid> nice webserver you've got there
01:47:00 <heat> I think consumer intel only goes up to 128GB anyway
01:48:00 <zid> I go up to 275GB and nobody has ever come up with a reason for such a weird number that's convincing
01:48:00 <klange> I had a 64-bit Atom board that only supported 2.
01:48:00 <zid> 375GB*
01:49:00 <zid> need me some 46.875GB dimms and an 8 slot mobo
01:50:00 <geist> hmm, think my x570 ryzen goes up to 128
01:50:00 <geist> guess that's not the chipset though, that's the ryzen. 4 channels of 32
01:51:00 <geist> well 2 channels, but two sockets epr
01:51:00 <geist> per
01:51:00 <gog> does that require NUMA?
01:54:00 <geist> nah that's a non numa
01:54:00 <geist> so far the numa ryzen stuff was isolated to just the first gen threadripper, where it had multiple full dies on board
01:54:00 <gog> also does the M1 have NUMA because of how it is?
01:55:00 <geist> when they went to zen2 and above they moved to a single IO die (with the memory controller) + a variable number of cpu chiplets
01:55:00 <geist> i dont think M1 does no
01:56:00 <gog> hm interesting
01:56:00 <heat> don't the really big epyc chips with really high memory-cpu density have NUMA?
01:58:00 <geist> ah fun. i already oomed the system when doing a non lto build but with -j256
01:58:00 <geist> there's a compile phase in clang build where it just has 256 compilers running with >= 500MB active memory
01:58:00 <zid> it's still numa but on the same scale as L3
01:58:00 <heat> hah
01:59:00 <heat> if you feel better my rpi zero 2 w has 4 cores and 512MB
01:59:00 <zid> it has to track across the chiplet things and do requests across them and blah cus it's 2x channels on 4 chiplets, not 8 channels
01:59:00 <heat> it OOMs compiling trivial stuff :)
01:59:00 <geist> that's what i mean by saying 128GB isnt enough for a 256 core machine. the ratio of GB/cpus should be at least 1, if not 2 or 4
01:59:00 <zid> I have.. 6x because I am weird
02:00:00 <zid> (and I couldn't be bothered to get matching kits)
02:00:00 <geist> generally if i can do it, i try to keep at least 2GB/core. so my main ryzen box is 32c/64g
02:00:00 <geist> etc
02:00:00 <geist> OTOH, this arm core is SMT4, and honestly past benchmarks have shown me that 4x SMT doesn't really help for compiles and whatnot
02:01:00 <heat> what does it help with?
02:01:00 <geist> that's a good question
02:01:00 <heat> compute?
02:01:00 <geist> presumably something with a lot of data stalls or whatnot
02:01:00 <geist> maybe databasy stuff
02:03:00 <geist> but for things like compiling linux kernel or qemu or whatnot i think i've seen that SMT2 is like 30% faster and SMT4 is basically within a few percentage of SMT2
02:03:00 <zid> I always call them webserver cpus
02:04:00 <geist> so it really doesn't make sense to run the cpu in SMT4 mode, but it's just too exotic *not* to
02:04:00 <zid> where you need a meg or two of memory for a thread of apache, but a lot of https on the cpus
02:07:00 <geist> huh okay that's odd. i am rerunning with -j128 and watching linux load up one node (it's a two node system) and leave the other one basically idle
02:07:00 <geist> that's not what it should be doing!
02:07:00 <geist> 90% utilization of all the 128 threads of one cpu socket, while the other one is like 7%
02:08:00 <heat> something cool popped up in upstream LLVM yesterday: BOLT
02:09:00 <heat> you profile your program and run bolt on it, and it re-arranges your binary's layout optimally
02:10:00 <geist> neat
02:13:00 <zid> so like pgo?
02:26:00 <heat> pgo for code layout yeah
02:28:00 <zid> pgo already re-orders switches and generates .hot and .cold and stuff I thought?
02:31:00 <heat> possibly but bolt does a lot more, not just .hot and .cold
02:31:00 <heat> and you don't need to recompile it
02:32:00 <heat> says you just need a symbol table and for max performance, relocations
02:32:00 <zid> oh, that's neat
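The workflow heat is describing is roughly this (a sketch based on the BOLT docs; the binary name is hypothetical and exact flags vary by LLVM version):

```shell
# 1. profile the program with branch sampling (LBR) where available
perf record -e cycles:u -j any,u -o perf.data -- ./myapp benchmark-args

# 2. convert the perf profile into BOLT's format
perf2bolt -p perf.data -o perf.fdata ./myapp

# 3. rewrite the binary with an optimised code layout
llvm-bolt ./myapp -o myapp.bolt -data=perf.fdata \
    -reorder-blocks=ext-tsp -reorder-functions=hfsort -split-functions
```

Note this all happens post-link, which is the point being made: no recompile, just a symbol table and (ideally) relocations in the input binary.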
03:40:00 <NiD27> Hey guys a novice getting into bootloaders here, The osdev wiki suggests that boot signature as byte 510 as 0x55 and 511 as 0xAA but the linked IBM resource suggests last 2 bytes(510, 511) should be 0xAA55 those 2 arent the same right?
03:41:00 <NiD27> which would be the correct one 0x55AA or 0xAA55
03:41:00 <klange> The IBM doc is considering it as a 16-bit value.
03:42:00 <klange> As bytes, it's 0x55 0xAA; as a little-endian 16-bit value it's 0xAA55.
03:42:00 <NiD27> oh ok gotcha
03:42:00 <NiD27> thanks :)
03:47:00 <geist> where on the wiki does it say 0x55 0x55? we should fix that if so
03:48:00 <geist> er 0xaa 0xaa
03:48:00 <NiD27> I meant byte 510 = 0x55 and byte 511 = 0xAA
03:49:00 <NiD27> its correct but since the IBM had it as 0xAA55 it confused me but now I know that its little endian 16bit it makes sense
03:58:00 <kazinsal> in practice I think most clone-era and later BIOSes just accepted either one
04:00:00 <geist> or didn't care. it was only a defacto standard that there had to be one there
04:01:00 <geist> huh. looks like you can get a free VM on a z system?
04:01:00 <geist>
04:01:00 <bslsk05> ​ Get started with IBM LinuxONE – IBM Developer
04:01:00 <geist> that might be fun to fiddle with for a little bit
08:33:00 <kazinsal> the more I read the VAX ARM the more I wish microVAX had become the dominant VLSI CISC in the early 80s instead of x86
08:39:00 <klys>
08:39:00 <klys> article 1/3 starts on page 82/116
09:50:00 <geist> kazinsal: right? the exotic one off instructions are really not the interesting part
09:50:00 <geist> the regularity of the main ISA is pretty elegant
09:54:00 <geist> also reading vax really informs how much the motorola folks must have been fans when they did 68k
09:58:00 <geist> nice thing 68k has is the base word size being 16 bits really makes the base opcode a bit more flexible
09:58:00 <geist> vax opcodes are still fundamentally 8 bit, and they also ran out, had to start adding escape opcodes, etc over time
10:00:00 <geist> also i dont think there's much of a pattern to the original opcode layout, but since it was explicitly microcoded from the get go, there's no real reason to have any pattern there. it just selects lines in a microcode rom
10:01:00 <GeDaMo> IBM 360 opcodes use halfwords (16 bits)
10:01:00 <bslsk05> ​ IBM System/360 Green Card - Disinfotainment
10:01:00 <geist> GeDaMo: yah not surprising
10:02:00 <GeDaMo> The opcode itself is still 8 bits but you don't have to fit operands in there too
10:02:00 <kazinsal> yeah, the vax architecture is just extremely well designed. they were totally aiming to make an ISA they could sell for 15+ years and still have it consistent and backwards compatible, and they accomplished that
10:02:00 <geist> in fact: if you look at the VAX opcodes by byte, they simply started at 0 and counted up to looks like 0xDF. the opcodes are laid out basically in the order they were added it seems
10:03:00 <geist> and then what appears to be the one and only escape opcode was 0xFD, and then they started over: 0xFD00 0xFD01 0xFD02 ...
10:04:00 <kazinsal> and you have a bunch of really handy futureproofed features like automatic system segment base/limit loading on traps so you can basically implement syscall trampolines with machine registers decades before anyone really thought that would be handy
10:04:00 <geist> if you haven't looked at it yet the whole interrupt level and the 15 software irqs are really nice too
10:05:00 <geist> you can see how VMS was built around it, and then subsequently NT picked up the same design, even if they had to implement it in software
10:05:00 <kazinsal> one thing this older (1987) copy of the manual doesn't elaborate on though is how multiprocessor machines worked
10:05:00 <geist> yah i dont think there's a rich supply of atomic SMP features except for the atomic linked list add/remove
10:05:00 <kazinsal> my understanding is that the VAX-11 wasn't technically an SMP system but VAX 8000 series and later were
10:05:00 <geist> which is also kinda a game changer
10:06:00 <GeDaMo>
10:06:00 <bslsk05> ​ Digital Technical Journals machine-readable archive
10:06:00 <kazinsal> in my readthrough of the VAX ARM I also noticed opcode 0B, CRC
10:06:00 <geist> true, though there *was* a VAX 11-782 which was dual processor
10:07:00 <kazinsal> full arbitrary-length CRC32 calculation in microcode, in 1977
10:07:00 <geist> though wikipedia says 782 was mostly a main cpu + secondary
10:07:00 <geist> so possible it's not truly SMP
10:07:00 <kazinsal> yeah, the 11/782 had two processors but apparently at least on VMS the primary one handled all I/O and scheduling
10:09:00 <geist> the living computer machine i still have an account on is apparently a quad core 7400
10:09:00 <geist> which is of course a very late model and pretty fast (quad 125MHz NVAX++)
10:12:00 <kazinsal> in an alternate, possibly better timeline there are 128-CPU SMT multi-gigahertz VAX machines in common use
10:12:00 <geist> if you haven't seen it there's a fantastic set of internal info about the later microprocessor based vaxes
10:12:00 <geist>
10:12:00 <bslsk05> ​ DEC Microprocessors
10:13:00 <geist> see all the links under VAX. was written by an insider at DEC (one of the managers of the whole thing) and has a bunch of stuff including internal microcode
10:13:00 <geist> was of course post VAX-11 and the non microprocessor based ones, but there's some lore in there about how the microprocessor team was competing with the discrete logic team, and of course eventually won
10:14:00 <geist> which is why the later series big vaxes were eventually based on high end microprocessors
12:17:00 <gog> aaaay i can do syscall now
12:18:00 <gog> but not sysret (yet)
12:18:00 <g1n> gog: nice!
12:18:00 <gog> long way to go before i can properly say i have user-mode
12:18:00 <gog> baby steps
12:19:00 <gog> should probably have a handler for #GP now tho
12:20:00 <GeDaMo> Do you really want user mode? Users tend to make a lot of complaints :P
12:20:00 <zid> My usermode is *such* a hack :p
12:20:00 <zid> I kept breaking it too
12:20:00 <zid> IRQs are annoying
12:21:00 <gog> yeah i get the feeling i'm gonna have my own troubles with it
12:22:00 <gog> this shit is held together with scotch tape and paperclips
12:22:00 <zid> mine works, and by works I mean, will break immediately if anybody touches it
12:22:00 <gog> here in #osdev we make "quality" software, that is to say, software with the quality of being incredibly fragile
12:22:00 <gog> in that sense, i'm much like my own code
12:23:00 <gog> GeDaMo: thankfully i have no users. somebody did fork my repo but they're about a dozen commits behind now
12:24:00 <klange> "bad" is a quality
12:24:00 <klange> "low" is a quality
12:24:00 <zid> my main issue making it fairly jank is just that I don't have.. anything else to mesh with it
12:24:00 <gog> if you file a bug report i will literally cry
12:25:00 <zid> Adding the first user program meant I didn't have a task subsystem, etc
12:25:00 <gog> yeah i still don't have that lmao
12:25:00 <zid> my bootloader just leaves some crud at 2MB for my kernel to.. blindly jump to
12:25:00 <klange> if it makes you feel any better, this is the sort of stuff that, after having gotten it working a decade ago, I dare not touch out of fear
12:25:00 <zid> because I don't have a filesystem either
12:26:00 <gog> i'm probably gonna jump back into my loader shim
12:26:00 <gog> it's the only other "separate" piece of code i load
12:26:00 <klange> I have only the vaguest notions of what this magic incantation does any more
12:26:00 <bslsk05> ​ toaruos/user.c at master · klange/toaruos · GitHub
12:27:00 <gog> heh yeah looks shaky
12:27:00 <gog> but if it works it ain't broken
12:27:00 <zid> I have you beat
12:27:00 <bslsk05> ​ boros/task.c at master · zid/boros · GitHub
12:28:00 <gog> hey at least you're zeroing out all the registers
12:28:00 <gog> good security
12:28:00 <zid> yea I threw it in when I moved the code block to its own file
12:28:00 <zid> instead of being... in main()
12:31:00 <gog> question re: tlb, if another cpu doesn't ever use a mapping do i have to issue a shootdown when i unmap it?
12:31:00 <gog> not really relevant yet, just curious
12:32:00 <zid> cpu issues shootdowns by itself
12:32:00 <zid> the question is whether you have to invlpg it for yourself, and the question is yes
12:32:00 <gog> yes i do that
12:33:00 <gog> and also it does?
12:33:00 <zid> tlb shootdown is qpi nonsense
12:33:00 <zid> there's no instructions for that
12:34:00 <gog> i don't follow
12:37:00 <zid> you'll only need to do manual ipis and handlers and counters and stuff when you're dealing with shared stuff
12:39:00 <gog> ok yeah i see that TLBs are per-core so if another core never accesses the page it never caches the mapping
12:39:00 <gog> excellent
12:40:00 <zid> the reloading of cr3 when the task swaps flushes the tlb anyway (except for global=1 pages) so you'll only care about the TLB when making a permission stricter within a process
12:40:00 <zid> so if you unmark a page as writeable you'll need to tlb flush it to get the correct perm bits into the tlb entry
15:24:00 <NiD27> Hey guys having some confusion with assembly intel uses mov DEST, SRC and AT&T uses mov SRC, DEST right?
15:24:00 <gog> yes
15:24:00 <NiD27> can you please link me some good docs on it?
15:25:00 <NiD27> I'm finding loads of conflicting info
15:25:00 <gog> with regards to what exactly?
15:25:00 <gog> the GNU assembler manuals are the best reference for its syntax
15:25:00 <gog> and for intel syntax the NASM manual is decent enough
15:25:00 <NiD27> oh ok thanks thats what I was looking for
15:25:00 <NiD27> GNU uses AT&T correct
15:26:00 <gog> you can also use intel syntax with GAS with some caveats, there are some instructions that don't work quite right in intel mode
15:26:00 <gog> can't recall off the top of my head which
15:26:00 <gog> but by default yes its at&t
15:26:00 <zid> I'd say more realistically it has made different annoying decisions about what intel syntax 'should' be (like how nasm syntax is nasm syntax not true intel)
15:26:00 <gog> yeah
15:27:00 <NiD27> oh for a newbie which would be better to learn you would say AT&T or intel or nasm version of intel
15:28:00 <zid> if you intend to exclusively hack on gnu projects with inline assembly, at&t, otherwise intel
15:28:00 <gog> i think for new people nasm has the least confusing rules
15:28:00 <gog> at&t requires signifiers for registers and immediates whereas nasm doesn't
15:28:00 <gog> and even more confusingly, gas supports intel syntax with prefixes or without
15:28:00 <NiD27> the % and $ right
15:28:00 <gog> yes
15:29:00 <zid> AT&T is what you get if you use a 70s assembler for VAX and just rename the registers to get 'intel support'
15:29:00 <NiD27> Its so confusing most wikis fail to mention what they are using and ofc even hello world fails to work because I either added section or I didnt lol
15:30:00 <NiD27> From a newbies perspective its confusing I mean
15:30:00 <zid> sections are specific to the assembler in use, they're nothing to do with the cpu itself, and everything to do with the object file it creates
15:30:00 <zid> 'learning assembly' is easy, but you'll still end up needing to know about ABI, ELF, etc on top
15:31:00 <zid> so learning them all simultaneously is.. a thing
15:31:00 <gog> yeah it's a lot
15:31:00 <NiD27> yup I found that out the hard way not understanding why my perfectly copy pasted code wont compile lol
15:31:00 <gog> unless you're just gonna play with bare metal real mode for the time being
15:31:00 <zid> bare metal real mode is how I learned :D
15:31:00 <zid> I made fire demos and stuff
15:31:00 <gog> same
15:31:00 <NiD27> I've heard about ELF while compiling but I'll write that down to read more
15:32:00 <NiD27> oh I did intend to go into protected mode on bare metal but from what I gather I should mess around more in real mode
15:32:00 <gog> eh, it's not strictly necessary
15:33:00 <gog> protected mode is more useful to learn about since you're probably not reimplementing DOS or smth
15:33:00 <gog> unless you are lol
15:33:00 <zid> pmode is the osdev side
15:33:00 <zid> I wouldn't go there until you're comfortable with osdev concepts too
15:33:00 <zid> you can do basically *nothing* without a lot of background concepts
15:34:00 <gog> yeah there's a lot to understand about how the computer does The Thing™
15:34:00 <zid> That's why I like gameboy
15:34:00 <zid> easy assembly flavour like real mode x86, but very basic mmio to teach you about hw
15:34:00 <gog> yes
15:35:00 <zid> (NES is a worse cpu and a vastly worse hw model, anyone who suggests NES should be shot)
15:35:00 <gog> NES is 6502-like and GB is more like 8080 right?
15:35:00 <zid> yea NES is an actual 6502
15:35:00 <gog> ah thought so
15:35:00 <zid> gb is some weird z80/8080
15:36:00 <gog> yes
15:36:00 <zid>
15:36:00 <bslsk05> ​ gbops - The Game Boy opcode table
15:36:00 <gog> sharp l435902
15:36:00 <gog> lr35902*
15:36:00 <zid> It's a very nice little cpu
15:36:00 <NiD27> I might try to reimplement DOS or smth in the future my main goal is to learn as much as possible
15:37:00 <gog> funny that it ends in "02", made me think for a second that it was a 6502 variant
15:37:00 <zid> It sounds like a type of battery to me
15:37:00 <zid> LR-359-02
15:37:00 <gog> yeah that too lol
20:26:00 <graphitemaster> osdevers I'm doing a poll. When you think of a mutex do you think of something that prevents execution of critical sections of code, or do you think of something that protects shared access to data. Like what is your mental model. Do you think the only use-case of a mutex is to protect a shared resource or can you go an abstraction lower and see its use for defining an execution order?
20:27:00 <graphitemaster> Do you think a mutex protects data or does it protect the execution of code (which as a side-effect is often how you achieve the behavior of protecting data).
20:29:00 <kingoffrance> but code is data ...<waves hand>...[disqualifies self]
20:30:00 <gog> yeah if you're into von Neumann machines
20:30:00 <kingoffrance> ^
20:30:00 <GeDaMo>
20:30:00 <bslsk05> ​ Self-replicating spacecraft - Wikipedia
20:30:00 <gog> this is an interesting question though
20:31:00 <zid> I think of it as a primitive for mutual exclusion
20:31:00 <zid> wouldn't be the first gerund in the english language
20:31:00 <zid> does 'party' mean to party, or a party
20:32:00 <sham1> kingoffrance: may I interest you in some Lisp
20:32:00 <graphitemaster> My mental model is that it protects the execution of code. I often say "it protects code" but I'm seeing how that could be misunderstood. Recently got in an argument about it. I rarely think about the data itself, even though that's what it achieves and what its primary use-case is. I think looking at it as protecting data is wrong and I've been called out on this take so I'm genuinely curious if this is an institutional thing.
20:32:00 <sham1> Code indeed is data
20:32:00 <zid> The reason that code exists is to protect the data, but the reason that data exists is to protect other *code* from running :P
20:32:00 <gog> any locking mechanism can be said to prevent the execution of code though
20:33:00 <zid> and that code probably operates on some data..
20:33:00 <sham1> Anyway, you'd usually want a mutex to prevent parallel access to shared resources. So you get mutual exclusion going. So I'd say that mutexes are fundamentally about data
20:33:00 <graphitemaster> Right but a mutex does not need to be used to protect data. You can have situations where the mutex exists just to define an order of say items in a menu, or lines in a file, and if you remove it you have no data-race, but you just don't have the expected order you'd want.
20:34:00 <sham1> I'd argue those could still count as shared resources
20:34:00 <sham1> That you don't want to grant concurrent access to since you get the wrong order
20:34:00 <sham1> s/concurrent/parallel
20:34:00 <sham1> Because concurrency != parallelism
20:35:00 <zid> at the end of the day, all code exists to produce/manipulate data
20:35:00 <zid> If all your program does is calculate a value then toss it away, you've written haskell, not a useful program
20:36:00 <graphitemaster> You could also do it just to define an execution order where it's not even necessary to, like in the case of a scheduler you may want to use locks because it would be more efficient if tasks ran in this specific order on the processor, you can imagine having a critical section held for any task that has a large register state because you know scheduling those would evict cache more.
20:36:00 <sham1> Processor time is still shared resources
20:36:00 <sham1> I don't think any of these examples contradict me
20:36:00 <graphitemaster> Right but that's not shared data.
20:36:00 <graphitemaster> That is a shared resource.
20:36:00 <sham1> Indeed. It's resources yeah
20:36:00 <graphitemaster> But the question was about shared DATA.
20:37:00 <zid> You're off in the weeds already
20:38:00 <GeDaMo> You asked a leading question! :P
20:38:00 <zid> Figure out what you're trying to argue first
20:38:00 <zid> and what the implications of your interpretation are
20:38:00 <graphitemaster> Do you think of a mutex as a mechanism to protect data - or a mechanism to control the execution order of instructions.
20:38:00 <graphitemaster> I think that's my question.
20:39:00 <sham1> Neither
20:39:00 <zid> neither
20:39:00 <graphitemaster> What do you think of it as then?
20:39:00 <zid> It's a primitive to implement mutual exclusion
20:39:00 <sham1> I think I've made my position clear. Allow for someone to have mutual access to shared resources
20:40:00 <graphitemaster> So sham1's definition includes any resource, not just data, but also time and higher-level concepts of a resource like items in a menu (for example)
20:40:00 <GeDaMo> It serialises resource access
20:40:00 <zid> That's what it's for, not what it is
20:40:00 <zid> that's like saying a gun is "for deadening birds"
20:41:00 <GeDaMo> You're a bird! :P
20:41:00 <zid> A hot one too
20:41:00 <GeDaMo> Fried chicken? :|
20:42:00 <graphitemaster> Well what it's for is kind of important. Can a mutex be used for MORE than mutual access to shared resources?
20:42:00 <graphitemaster> I think it can.
20:42:00 <zid> You can use it as part of whatever you like
20:42:00 <zid> the question is then is it mandatory, useful, etc
20:42:00 <zid> which is not your question
20:43:00 <kingoffrance> thats why you name your gun peacemaker. it makes peace. its what it is *and* what it is for /s
20:44:00 <graphitemaster> zid, Then why would people think of a mutex almost _exclusively_ as only one of its use-cases?
20:44:00 <zid> says who?
20:44:00 <zid> wikipedia: "In computer science, a lock or mutex (from mutual exclusion) is a synchronization primitive"
20:45:00 <zid> what some people think shouldn't factor into what something is anyway
20:45:00 <zid> Then your question becomes "Why do these people think x?"
20:46:00 <graphitemaster> Yeah but wikipedia also goes on to talk about shared data immediately after that and when you talk to people they discuss it in terms of protecting data, rather than that being a use-case. The focus is on the data, rather than the synchronization part and how it controls execution, primarily. Rust even goes as far as to make it Mutex<T> where T is the data that needs to be protected by this mutex.
20:46:00 <zid> No, it immediately talks about shared *resources*
20:46:00 <zid> Then describes a use case
20:46:00 <zid> And again, weeds
20:47:00 <graphitemaster> Well "a data object"
20:47:00 <graphitemaster> Whatever that means
20:47:00 <graphitemaster> But again, discussing shared resources here is only a use-case. It doesn't make the mutex that.
20:49:00 <graphitemaster> I never think about data when I'm using a mutex. I'm thinking about which pieces of code I don't want running at the same time because it would be bad for them to be, not only because they would be operating on the same resource but other reasons too. The protection of data is a side-effect of controlling the execution, it's not the purpose of the mutex when I decide to use one.
20:49:00 <zid> go for it
20:49:00 <zid> but that doesn't change what it is
20:49:00 <sham1> Well let's turn the question to you. What do *you* think a mutex is
20:50:00 <zid> paradigms are a thing, contrats
20:50:00 <zid> grats*
20:50:00 <graphitemaster> I think a mutex is a tool for preventing multiple threads of execution from executing blocks of code that are designated as critical sections.
20:51:00 <graphitemaster> At the same time.
20:51:00 <sham1> Yes, and what would happen if they were to be executed in parallel in the critical sections
20:51:00 <sham1> Why do you want to prevent it
20:51:00 <graphitemaster> I want to prevent it because the code itself is serial in nature.
20:51:00 <sham1> But then why is it parallel
20:52:00 <graphitemaster> Because not all of it is serial, only the critical section needs to be serial.
20:52:00 <sham1> Alright. And why do the critical sections have to be serial?
20:52:00 <sham1> All about them leading questions
20:53:00 <zid> he knows he just doesn't wanna say :p
20:53:00 <graphitemaster> They have to be serial because either the data they mutate is shared, in which case you'd have a data race, or because there's a specific order the operations must be done in to get the right result, e.g I want to compute (a + b) * c, not a + (b * c)
20:54:00 <sham1> Indeed, and what do you get when you generalise those two concepts?
20:54:00 <graphitemaster> The latter example has nothing to do with data as far as I'm concerned, but maybe I'm wrong.
20:54:00 <zid> you don't think numbers are data?
20:54:00 <sham1> The computation is still a shared resource. It's a bit of a stretch but w/e
20:55:00 <graphitemaster> I don't think the execution order is data, no.
20:55:00 <zid> the result sure is
20:55:00 <zid> "It gives the wrong result"
20:55:00 <zid> is a pretty large catchall for all of this
20:55:00 <zid> and the result is going to be some memory being different somewhere, no matter what your program /actually/ does.
20:57:00 <graphitemaster> I think that's a stretch, but also generalizing a mutex to only be about the data is so alien to me from a mental understanding. Like when I hear people literally describe and speak of it in this context my brain glitches out because that's not my experience or process of thought, even though I seem to arrive to similar solutions.
20:57:00 <sham1> And that's why it's not actually data but resources. People just say data and everyone knows which page we are on
20:57:00 <zid> Wait til you hear about people teaching children multiplication via set theory
21:00:00 <geist> graphitemaster: good question re: mutex
21:01:00 <geist> *usually* it's explicitly used to protect access to a particular piece of data, so in general i'd say the latter
21:01:00 <geist> and then the code comes along for the ride, because you're also keeping the code that acts on the data exclusive
21:01:00 <geist> since most of the time you dont have code that operates on no data at all
21:01:00 <geist> obviously can find counterexamples
21:02:00 <zid> I just don't think it's useful to think about
21:02:00 <zid> It's on completely the wrong layer
21:04:00 <zid> It's absolutely conflating what a thing is with what a thing can be useful for. A screwdriver is a physical tool, me using it as a make-shift chisel shouldn't blow your mind "whoa I didn't know you could do that"
21:04:00 <zid> because you learned it as "a thing for undoing screws"
21:05:00 <graphitemaster> zid, The problem is how it's taught I think. I'm sensing that it's more of a stretch for someone to realize the screwdriver can be used as a chisel because they can only think of it as a screwdriver.
21:05:00 <graphitemaster> Than it is to realise it can be used for anything requiring a sharp straight edge.
21:06:00 <zid> sounds like you're just smarter at physical objects than you are at cs concepts, not surprising
21:07:00 <geist> i really never formally learned most of that stuff either, so makes sense for me to think that way
21:08:00 <geist> all my CS classes at college were more practical matters since i did a computer engineering degree
21:08:00 <geist> kinda glad to be honest, never did any of the formal language/concepts classes which sound boring to me
21:08:00 <zid> Yea they do sound INCREDIBLY boring
21:08:00 <sham1> Meanwhile I love the theoretical stuff. Like the maths is so interesting
21:08:00 <sham1> I just love it
21:09:00 <geist> sure. its different for other folks
21:09:00 <gorgonical> yeah I also got a degree in math because I loved it too
21:09:00 <geist> though i do have to say the boolean algebra class was pretty good and actually useful
21:09:00 <sham1> I do CS but quite a bit of maths alongside that
21:09:00 <gog> i think the class i enjoyed the most was digital logic
21:09:00 <zid> I see CS classes as 'talking about graph theory of data structures you will never ever implement' :P
21:09:00 <geist> what i am however perpetually out of sync with is these sort of cs concepts that folks use across languages
21:09:00 <zid> rather than things like digital logic
21:10:00 <gorgonical> I took my university's offer to get a "second" degree in math. like 75% of the load of a double. I really liked automata and advanced algo analysis
21:10:00 <sham1> Automata are very neat
21:10:00 <gorgonical> advanced algo was unironically the hardest thing I've ever done. We got grilled for 1.5 hours and the homework took 5 hours per class. I loved it lol
21:10:00 <geist> yah analysis of algos was fairly useful
21:10:00 <geist> i didn't really remember most of the stuff it taught, except it gave you the idea of thinking about things as O(x)
21:11:00 <geist> which i think is fairly helpful
21:11:00 <gorgonical> oh yeah. we harped on worst-case performance which was something I hadn't really considered in undergrad as a primary design goal
21:11:00 <zid> I learned about O notation just super passively
21:11:00 <zid> it comes up often enough that you can figure it out
21:11:00 <sham1> O notation is horrendous
21:11:00 <zid> You can look it up if you actually care, almost none of the time you actually will though
21:12:00 <sham1> Whose idea was it to make sets be equal to functions in this context
21:12:00 <geist> but again, i think the key is to lightweight know it
21:12:00 <sham1> Why is f(x) = O(g(x)) instead of f(x) \in O(g(x))
21:12:00 <geist> it may be mathematically horrendous, but to be casually aware of it is a nice leveller
21:12:00 <sham1> It bothers me more than it probably should
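(For the record, the notational abuse sham1 is objecting to is standard: O(g) is formally a *set* of functions, so the precise statement is membership, and the customary "=" is a one-way convention.)

```latex
% O(g) is a set of functions:
\[
O(g) \;=\; \bigl\{\, f \;\bigm|\; \exists\, C > 0,\ x_0
  \ \text{s.t.}\ |f(x)| \le C\,|g(x)| \ \text{for all } x > x_0 \,\bigr\}
\]
% Precise statement:    f \in O(g)
% Customary shorthand:  f(x) = O(g(x))   -- the "=" is not symmetric here
```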
21:12:00 <zid> It's a useful thing just as a shortcut for talking about why that other guy's an idiot for picking bubblesort
21:12:00 <zid> or whatever
21:12:00 <gorgonical> I think O() notation is really useful when it's embodied in certain concepts. You think about it indirectly in the data structures you use, etc. I have to date not done a real-world analysis using it
21:12:00 <graphitemaster> zid, Not sure what it has to do with physical objects. I just prefer if things are not taught with the implied use-case and layer of abstraction built onto it, I think that's kind of damaging in a way. It's a lot like when you transition into higher education and they tell you "forget everything you learned about X because it's wrong, here's the REAL truth"
21:13:00 <zid> but nobody should be sitting down and figuring out what the complexity of any *particular* sort is
21:13:00 <geist> right, i suspect actual math folks may find it terrible, but if you casually know it and apply it and provided you're not just wrong, it's useful way to communicate things
21:13:00 <zid> unless they're doing a cs phd
21:13:00 <geist> gorgonical: precisely
21:13:00 <geist> graphitemaster: thing is lots of CS things are concrete things to solve a problem
21:13:00 <zid> graphitemaster: you've picked a prescribed use-case, you're opposite of what you're saying
21:13:00 <geist> because they can be reused is secondary
21:14:00 <zid> you've also picked the *less* useful one, to a lot of people
21:14:00 <graphitemaster> geist, That's what makes CS less science I think, feels more like physics than maths.
21:14:00 <zid> You've said you *only* see screwdrivers as chisels
21:14:00 <kingoffrance> "It's a lot like when you transition into higher education and they tell you "forget everything you learned about X because it's wrong, here's the REAL truth"" this happened in elementary school for me -- forget everything you know, the future is all cursive
21:14:00 <graphitemaster> No, I see screwdrivers as just a cylinder shaft with a weird shape on the end.
21:14:00 <zid> nope
21:14:00 <kingoffrance> somehow they hadnt thought of typewriters, let alone computers
21:15:00 <zid> We covered all this earlier
21:15:00 <zid> what a mutex *is* is a synchro primitive (metal thingy), you can *use* it to prevent code running / prevent state errors / whatever
21:16:00 <zid> You've repeatedly said you view it as blocking certain code from running
21:16:00 <zid> That's the chisel
21:16:00 <geist> also if you use any of my screwdrivers as a chisel imma kick your ass
21:16:00 <gog> ^
21:17:00 <zid> How appropriate, you fight like a cow.
21:17:00 <zid> Wait, I clicked the wrong retort
21:17:00 <geist> well, i guess getting kicked by a cow would suck
21:17:00 <geist> s/cow/moose and graphitemaster will really understand
21:18:00 <sham1> Anyway, if you get more physics than mathematics from CS, I'd say that you get taught CS incorrectly because it's basically a subfield of applied mathematics. Hell, it has proofs and such
21:21:00 <geist> ehhh. thats a very mathematician way of viewing it
21:21:00 <geist> it's like saying industrial engineering is a subfield of carpentry
21:21:00 <zid> industrial engineering is a subfield of redneck engineering
21:21:00 <geist> or vice versa maybe
21:22:00 <graphitemaster> zid, I'd say the chisel is the lowest abstraction of the mutex, there is no cylinder shaft with a weird shape on the end parallel for a mutex.
21:22:00 <geist> depends on if you see the algorithmic proof, etc stuff in CS or the practical 'here are a bunch of building blocks you should know to write code'
21:22:00 <graphitemaster> The only shared resource is very screwdriver though
21:22:00 <zid> well you're wrong, sorry
21:22:00 <graphitemaster> Well what is the lower abstraction of a mutex then
21:23:00 <graphitemaster> Give me the cylinder parallel
21:23:00 <sham1> I mean for me, there's a difference between software engineering and computer science. Of course an engineer should know some CS and the CS person should probably have some familiarity with how software is engineered. But the fields are somewhat different still
21:23:00 <zid> so sham1 is a CS purist?
21:23:00 <geist> fair. and may be the definitions of SE have changed over the years
21:23:00 <sham1> Lower abstraction of a mutex? A spinlock, for example. A file descriptor can also be used as so
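(A minimal sketch of the spinlock sham1 mentions, in C11 atomics — illustrative only; a real mutex layers sleeping and wakeup on top of something shaped like this:)

```c
#include <stdatomic.h>

/* Minimal test-and-set spinlock: the "lower abstraction" under a mutex.
   A mutex adds blocking/wakeup; a spinlock just busy-waits. */
typedef struct {
    atomic_flag held;
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l) {
    /* test-and-set returns the previous value; loop until it was clear */
    while (atomic_flag_test_and_set_explicit(&l->held, memory_order_acquire))
        ;
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}
```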
21:23:00 <geist> in my school SE was far more about planning things, gantt charts, flow charts, etc
21:23:00 <geist> seemed to be mostly the project management part than the actual doing
21:24:00 <zid> CS is the mathematical study of.. things you might wanna use a computer for unless you have a LOT of paper.
21:24:00 <geist> but i get the impression that that is either weird (even at the time) or it has morphed a bit
21:24:00 <sham1> Well there's also that and architecture stuff from what I've understood
21:24:00 <zid> sham1 refuses to beeeive computers exist :p
21:24:00 <zid> believe*
21:24:00 <zid> "Arthur, get the ream of A4, I've discovered a new self-balancing binary tree"
21:25:00 <graphitemaster> I refuse to believe that registers exist.
21:25:00 <graphitemaster> What am I?
21:25:00 <geist> a troll?
21:25:00 <graphitemaster> A stack machine
21:25:00 <kingoffrance> bound to reinvent registers
21:25:00 <geist> oh. well true
21:25:00 <geist> register denialist
21:25:00 <kingoffrance> its a good jedi mind trick if that is your intention :)
21:26:00 <zid> There's always the argument that a stack is just registers with a weird access mechanism
21:26:00 <zid> but I won't be having it
21:26:00 <graphitemaster> No cap, aren't most registers on a machine just backed by one massive register file and the CPU is renaming and allocating from that file all the time, the actual registers don't really exist do they XD
21:26:00 <graphitemaster> illusionary registers
21:26:00 <geist> someone here a while back mentioned some idea that you could build a machine that has a bunch of registers (say r0-r31) but each of those is a pointer to memory and a stack
21:26:00 <gorgonical> "no cap." are we at that point where people this young are in osdev? Even I get to be old now
21:26:00 <geist> so you could push/pop onto r5
21:27:00 <sham1> zid: I see you are familiar with the x87 FPU
21:27:00 <zid> sham1: exactly
21:27:00 <gorgonical> graphitemaster: Yes that is also my understanding
21:27:00 <graphitemaster> I'm almost 30 dude, I'm just around too many zoomers these days.
21:27:00 <graphitemaster> no cap, sheesh, it's entered me lexicon
21:27:00 <gorgonical> Yeah the ephemeral registers was how I was taught in my architecture grad course
21:27:00 <zid> now he's krabs from spongebob
21:28:00 <gorgonical> good now I can continue living in denial that I'm actually getting anything done
21:28:00 <geist> gorgonical: what is 'ephemeral registers'? can haz one sentence summary?
21:28:00 <gorgonical> Just a sec, let me consult the textbook
21:29:00 <geist> i've probably heard of it i just dont know what the term is referring to
21:29:00 <sham1> gorgonical: man, I'm 23 years old and even I feel old when people say stuff like "no cap"
21:29:00 <zid> he means that registers are 'fake' because of register renaming, presumably
21:29:00 <zid> so there's an internal_reg_47_holding_eax which is ephemeral
21:29:00 <geist> i heard a young NPR news person the other day say 'a minute' in the gen z sense
21:29:00 <geist> in an actual news report
21:29:00 <zid> chotto a minute
21:29:00 <gog> computers are fake
21:29:00 <graphitemaster> it's a hint of a register, a register la croix, the natural essence of a register
21:29:00 <sham1> zid: disgusting
21:29:00 <zid> sham1: sorrymasen
21:30:00 <sham1> '_>'
21:30:00 <graphitemaster> when you trick rocks into thinking, sometimes you have to trick people into thinking the rocks being tricked into thinking think a different way from how they actually think.
21:30:00 <geist> zid: that sounds too concretely referring to a CE concept than something that would be discussed at CS level (re: internal register renaming)
21:31:00 <zid> It's not a trick, you're just bad at engineering
21:31:00 <geist> if you trick the rocks too much they'll get sad
21:31:00 <sham1> There's a reason why abstractions exist
21:31:00 <zid> Registers *is* how the software interface works
21:31:00 <zid> regardless of how it's implemented
21:31:00 <geist> then you have sad sand
21:31:00 <geist> and then thats when you get Spectre
21:31:00 <kingoffrance> yeah, ghostbusters had that
21:31:00 <kingoffrance> they had to play music to cheer it up
21:32:00 <geist> hah the ooze?
21:32:00 <graphitemaster> spectre is what happens when you don't think enough about your own tricks, classic trick mistake.
21:32:00 <graphitemaster> everyone knows if you're going to make a trick to really think about the trick to infinite
21:32:00 <geist> caveman science fiction: AM IS GODS?
21:33:00 <sham1> Tricking the infinite is my new prog rock album
21:33:00 <graphitemaster> Like when I was designing a soft core I thought I was genius for inventing a special hardware lock instruction where you specified how many instructions after that instruction you wanted to behave atomically. I thought I was genius, it was just an immediate count of the instructions following it that behaved atomically. Then I realized after I tried the trick that it was a terrible trick
21:33:00 <graphitemaster> It was fun for a moment though
21:34:00 <gorgonical> geist: I can't seem to find the exact ref in the book, but my prof did sort of sprinkle current research on us in the class. IIRC he was describing that the hardware is smart enough to "allocate" slots from the file with renaming, but I can't be sure anymore
21:34:00 <geist> oh well that may just be referring to one of many register renaming tricks yeah
21:34:00 <sham1> graphitemaster then:
21:34:00 <geist> and in some ISAs it's even explicit (IA-64, etc)
21:36:00 <graphitemaster> In the end after experimenting with different types of atomic methods, the ARM monitor method turned out to be the best. I think x86's is ridiculous, it's so excessive, like finding out it's raining out and instead of taking an umbrella you take a flame thrower and point it in the air and evaporate all the water before it hits you
21:36:00 <sham1> x86 is just legacy
21:37:00 <graphitemaster> The analogy of a flame thrower is accurate for x86 because it also runs hot.
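(The "monitor method" here is ARM's load-exclusive/store-exclusive pair, LDXR/STXR. A sketch of the same shape in portable C11 — the compare-exchange loop lowers to an exclusive-monitor retry loop on ARM and to a single LOCK-prefixed instruction on x86:)

```c
#include <stdatomic.h>

/* Atomic fetch-and-add written as an explicit retry loop.  On ARM the
   compiler lowers the compare-exchange to LDXR/STXR, retrying whenever
   another core touches the monitored line; on x86 it becomes LOCK CMPXCHG. */
static int fetch_add_retry(atomic_int *p, int v) {
    int old = atomic_load_explicit(p, memory_order_relaxed);
    while (!atomic_compare_exchange_weak_explicit(
               p, &old, old + v,
               memory_order_acq_rel, memory_order_relaxed))
        ;  /* on failure 'old' was reloaded for us; just retry */
    return old;  /* value before the add, like atomic_fetch_add */
}
```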
21:37:00 <zid> other chip makers' wishes aren't reality
21:37:00 <zid> perf/watt on x86 is extremely world class
21:38:00 <sham1> We need RISC-V to save us. Sadly RISC-V doesn't have a bswap-like instruction, instead requiring multiple things to do the endianness conversion. Oh well, t'is the RISC way
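(The multi-instruction sequence sham1 alludes to looks like this in C when the target lacks a byte-swap instruction — base RV64I; the later ratified Zbb extension did add `rev8`:)

```c
#include <stdint.h>

/* 32-bit endianness conversion from shifts and masks -- roughly the
   instruction sequence a base-ISA RISC-V compiler emits, versus a
   single bswap on x86 or rev on ARM. */
static uint32_t bswap32(uint32_t x) {
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}
```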
21:38:00 <zid> Don't worry, extremely clever silicon will save it, maybe once it has macro-op fusion and branch pre- oh wait
21:38:00 <zid> people treating a random toy architecture as the second coming of christ still boggles my mind
21:39:00 <graphitemaster> I'll never take any architecture seriously until they solve the page table scalability issue. The moment you want a lot of cores (like I do), you need a lot of RAM to store page table entries and as long as we need excessive amounts of ram to solve that, the more impossible it'll be to have tiny devices with lots of cores because random access storage is not cheap
21:39:00 <zid> It's a log relationship and completely irrelevant
21:39:00 <graphitemaster> Like I want 4096 core CPUs here
21:39:00 <zid> You're just a weird idealist hung up on not very important to the engineering aspects
21:40:00 <graphitemaster> x86 is pretty bad from what I read for Linux, like a 128 core CPU requires at minimum 64 GiB to boot according to the Linux docs, just for kernel data structures and PTEs
21:40:00 <graphitemaster> That's with hyper-threading btw
21:40:00 <gorgonical> zid: is that perf/watt statement still true? i.e. why are the supercomputer vendors really eyeing up ARM64 if not for that?
21:40:00 <zid> It requires a few meg
21:41:00 <gorgonical> my understanding from riken was that ARM64 had a distinct edge there
21:41:00 <zid> and none of that is due to page tables
21:41:00 <zid> page tables lose you ram out of your ram, scaling with the total amount of ram you have
21:41:00 <zid> it's a log relationship to your max memory
21:41:00 <zid> s/max/installed
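(Back-of-envelope for zid's claim, assuming x86-64 4 KiB pages with 8-byte PTEs — the leaf-table fraction of installed RAM is fixed; only the number of levels grows logarithmically:)

```c
#include <stdint.h>

/* One 8-byte PTE maps one 4 KiB page, so the leaf tables needed to map
   all of RAM cost RAM/512, i.e. ~0.2%.  Each level above the leaves is
   itself ~1/512 the size of the level below, so it barely adds anything. */
static uint64_t leaf_pagetable_bytes(uint64_t ram_bytes) {
    uint64_t pages = ram_bytes / 4096;  /* 4 KiB pages to map */
    return pages * 8;                   /* 8 bytes per PTE */
}
/* e.g. 64 GiB of RAM -> 128 MiB of leaf page tables (~0.195%) */
```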
21:41:00 <zid> arm64 doesn't have a distinct edge unless it has a distinct edge for the exact code you will be running
21:42:00 <zid> You might get more perf/dollar with arm cpus, more realistically
21:42:00 <zid> supercomputers have a *lot* of cpus in them.
21:45:00 <zid> if your load is incalculable trivially in NEON compared to AVX, you want x86, if avx is overkill and you don't need huge inter-cpu memory bandwidth you want arm, if you want speed gains from an incredibly loose memory model and it's the 80s, you want alpha
21:45:00 <zid> etc
21:45:00 <geist> i was looking into whether or not linux supports native 16K pages on arm64, and surprisingly it doesn't seem to, or it has gotten the feature extremely recently
21:45:00 <geist> but that seems like a distinct advantage to arm server stuff: larger page sizes
21:45:00 <zid> if your load cares, it may not
21:46:00 <j`ey> geist: it supports it, its what im using on my m1
21:46:00 <zid> also x86 has 2M pages if your load isn't in that sweet spot where 16k is magically better than 4k but worse larger
21:46:00 <geist> j`ey: oh? guess it got added pretty recently then
21:46:00 <geist> i knew it had 64k pages for a while, but i hadn't seen any 16 explicit support
21:46:00 <zid> 64k does seem like a nice size
21:46:00 <mxshift> Every time page sizes came up at Apple and Google, it was shot down by finding a bunch of apps/code that made deep assumptions about page sizes trying to get performance.
21:47:00 <mxshift> 4K is baked in so many places
21:47:00 <zid> 2MB is *slightly* big for most things atm
21:47:00 <geist> except OSX, which explicitly uses 16k pages on ARM
21:47:00 <geist> and iOS, so apple definitely pulled that trigger a while back
21:48:00 <j`ey> geist: It's been in there for years
21:48:00 <geist> i bet the moment they made their first 64bit core, since i dont think apple ever used a non-apple designed 64bit core
21:48:00 <geist> and the first ARM 64bit cores didn't support 16k. (cortex-a53, -a72, etc)
21:49:00 <mxshift> For iOS, page size had no legacy for the first version or so
21:49:00 <j`ey> geist: 16K has been supported since 4.10, that's 2015
21:49:00 <geist> j`ey: huh! I thought i saw references to patches on LKML in like 2020 for true 16k support
21:50:00 <geist> maybe it was something deeper, like the kernel used 16k but the file cache was still 4K and thus was shoring that part up
21:50:00 <j`ey> one of the problems for M1 is that the IOMMU layer in Linux doesnt support mismatched differences between CPU page size and the IOMMU page size
21:50:00 <zid> oh that's weird
21:50:00 <j`ey> the IOMMU page size is 16K, fixed, but we want to run a 4K page size kernel
21:50:00 <zid> thanks apple
21:50:00 <geist> yah perhaps that was it. they had a way to just fake out the rest of the kernel into thinking it was always allocating 4 4K pages back to back, but the MMU was always mapping them as a true 16K
21:51:00 <geist> but really plumbing that through to the rest of the kernel is the hard part
21:51:00 <geist> from my poking around on the M1 it's a bit strange in that it explicitly supports 4K and 16K base page granule, and *not* 64K
21:51:00 <j`ey> so for now, most people are running 16K kernels on the m1, but once that is fixed, people will probably go to 4K, since thats what distros do
21:51:00 <geist> which is of course perfectly valid according to the v8 spec
21:52:00 <geist> whereas most ARM-designed cores are 4/64 or 4/16/64
21:52:00 <geist> i think hypothetically you can design an ARM core that doesn't support 4K but that'd be probably a Bad Idea
21:52:00 <j`ey> redhat use 64K
21:52:00 <geist> but if someones gonna do it, it'd be Apple
21:53:00 <geist> my suspicion is they mostly leave the 4K support around for when they're running x86 code under Rosetta
21:53:00 <j`ey> yeah
21:53:00 <geist> i bet they run that aspace in 4K mode since you can dynamically switch when you load the TTBR0
21:53:00 <geist> er its a field in TTBCR0 but easy to swap on context switch
21:54:00 <j`ey> TCR_EL, but yeah
21:54:00 <geist> ah yeah that
21:55:00 <j`ey> m1 support is going quite well, I got X11 (CPU rendered) and wifi and stuff on mine last week
21:56:00 <geist> cool. i ordered a new macbook pro m1 but wont get it until february, but not gonna pave it
21:56:00 <geist> it'll be my main laptop
21:56:00 <zid> I think paving using M1 laptops would get expensive
21:57:00 <j`ey> zid: lol
21:58:00 <j`ey> geist: you can install linux on the mini then
21:58:00 <geist> sure, but i *like* using the mini with macos
21:59:00 <geist> i have no particular desire to turn it into Yet Another Arm Linux Machine
22:00:00 <zid> will it run freedos or does it only do long mode in rosetta?
22:02:00 <j`ey> geist: finnne
22:04:00 <geist> j`ey: i mean yeah. that's the real reason i dont want to pave my macbookpro
22:04:00 <geist> zid: i'm guessing the latter, though i assume you're also joking
22:04:00 <gog> pave?
22:04:00 <zid> well, freedos is definitely not linux
22:05:00 <geist> gog: well overwrite the macos install with something else
22:05:00 <gog> ohhhh
22:05:00 <gog> i had no idea there was a specific term for this activity
22:05:00 <zid> me either
22:05:00 <geist> dunno where i picked that up, i guess i assume everyone knows it
22:06:00 <geist> but now that i think about it its pretty much a specific term
22:06:00 <geist> we use it at work to mean 'completely wipe out a computer and put a new OS down'
22:06:00 <geist> like pave that phone with android, etc
22:06:00 <zid> geist is a zoomer afterall
22:06:00 <gog> i've literally never heard this term in that context
22:07:00 <geist> nah i think i've used the phrase pave a device longer than many zoomers have been alive
22:10:00 <gorgonical> I remember using that term when I was getting into computers in the mid 00's
22:10:00 <kazinsal> I've always used "flatten" but I've heard "pave" before
22:11:00 <j`ey> "The Fuchsia install process, called 'paving',"..
22:11:00 <gorgonical> I remember we'd install dban onto a crappy usb and pray the motherboard supported usb boot
22:16:00 <gog> i'd always say "wipe"
22:16:00 <zid> I'd say.. "install"
22:16:00 <zid> It doesn't need an object
22:17:00 <zid> install windows, rather than paving over linux with windows, or whatever
22:17:00 * gog paves over zid
22:17:00 <zid> always so kinky smh
22:19:00 <raggi> We do both flashing and paving
22:19:00 <zid> can you install some cornice around my desktop?
22:20:00 <raggi> The key difference is that paving is higher level logical operation, it depends on a running high level kernel to execute (we run a small fuchsia build to do it), and the data that is provided as input to a pave requires more complex processing than data sent to a rom or bootloader for flashing
22:21:00 <raggi> Such as, in paving on x64 we do a significant compression of the fvm artifacts, whereas we only do a basic sparsing of fvm for flashing
22:22:00 <zid> what about the turbo encabulator though
22:22:00 <raggi> We're sort of deprecating paving, with an eye to replacing it with flashing and recovery, but the road for these things is long. You can see details in various rfcs
22:23:00 <raggi> Oh that's why we deprecated paving really - we have a flux capacitor now, where were going we don't need roads
22:24:00 <Affliction> But 2015 was 7y ago
22:25:00 <raggi>
22:26:00 * kazinsal paves over gog with fishies
22:26:00 * gog chomps her way out
23:02:00 <sham1> How does one pave with fishes
23:07:00 <kingoffrance> i supposedly had a relative who did that, decades ago, for farming. my mom remembers. it "worked" eventually. its a type of compost :)
23:07:00 <bslsk05> ​ Morrison Gravel - Organic Fish Compost
23:24:00 <kazinsal> sham1: carefully
23:44:00 * FireFly paws the fishes