Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ (can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev2&y=22&m=5&d=18

Wednesday, 18 May 2022

01:21:00 <heat> hello c++ helpdesk
01:21:00 <heat> why does this not work https://godbolt.org/z/PrPonYz6T
01:21:00 <bslsk05> ​godbolt.org: Compiler Explorer
01:28:00 <gamozo> 0 C++ experience here, can you really template a conditional like that? I feel like the compiler might not be smart enough to constprop through the conditional
01:29:00 <heat> yup
01:29:00 <heat> the problem remains even if I switch the conditional for StaticBitmap or DynamicBitmap
01:29:00 <gamozo> really?
01:29:00 <heat> yup
01:29:00 <gamozo> this builds for dynamic bitmap for me
01:29:00 <gamozo> https://godbolt.org/z/f3aqMEvq6
01:29:00 <bslsk05> ​godbolt.org: Compiler Explorer
01:30:00 <heat> ok, staticbitmap doesn't
01:30:00 <gamozo> yeah, must be static-map specific I guess
01:30:00 <gamozo> haha, I only tried dynamic
01:30:00 <Griwes> heat, this->
01:31:00 <heat> huh
01:31:00 <heat> why?
01:31:00 <Griwes> you need to do member accesses into dependent base classes with this->
01:31:00 <Griwes> it's because the language doesn't know the base class until it's instantiated
01:31:00 <Griwes> but it needs to have already done the first phase of lookup at that point
01:31:00 <heat> only for templated stuff like this?
01:32:00 <Griwes> only for classes with dependent base classes, specifically
01:32:00 <Griwes> otherwise you already know the members of the base, so you can just do lookup into them
01:33:00 <gamozo> Wow, yeah, that's wild. Using explicit `this->` works
01:33:00 <gamozo> (I wish C++ had explicit `this->` tbh, I can't stand not "knowing where a variable comes from")
01:34:00 <gamozo> I think that's one of the reasons I'm okay with templating/generics/traits in Rust but not in C++
01:34:00 <heat> wdym?
01:34:00 <heat> explicit this-> works fine
01:34:00 <heat> but it's just noise
01:34:00 <Griwes> yeah it's super noisy most of the time
01:34:00 <gamozo> Yeah, I just wish it was enforced. It tells you where to look for the variable definition and I think makes code significantly more readable
01:34:00 <Griwes> painfully noisy even
01:35:00 <heat> you could enforce it with a custom clang-tidy check i guess
01:36:00 <gamozo> Yeah. It doesn't really bother me about my own code, I just find it very confusing when reading other code, like Chromium. The project is so massive and when you see a variable you have no idea where it's declared
01:36:00 <heat> probably in the current class
01:36:00 <gamozo> Could be up 2000 LoC in the function start, 5 classes deep, etc. It's one of my gripes with C++, I think it's fine for code you're working on (because you're familiar with it), but really annoying with foreign code
01:36:00 <heat> but yes I get what you mean
01:36:00 <gamozo> yeah
01:36:00 <Griwes> I run editors where I get semantic highlighting of everything, and that solves the problem much more elegantly
01:36:00 <heat> it's a problem with all OOP languages
01:37:00 <Griwes> this-> and self. littered all over the place just eats up so much space
01:37:00 <gamozo> I do security research so 99% of my time is spent auditing other peoples code. So I get to see a _lot_ of styles
01:37:00 <Griwes> I'd probably not mind having something like a prefix dot to access a member of this/self
01:38:00 * Griwes makes a mental note for when he goes back to his language again
01:38:00 <gamozo> Yeah. That actually sounds nifty
01:39:00 <Griwes> (I already have the syntax used in places, I just never thought about using it more generally)
01:39:00 <gamozo> It's more for readability, not writing. Writing I feel like C/C++ let you do _whatever_ style you want lol
01:39:00 <Griwes> yeah the problem is that this->/self. actually *hurts* readability IME, though your mileage may vary
01:40:00 <heat> rust solves this
01:40:00 <heat> write rust
01:40:00 <heat> rust best language
01:40:00 <gamozo> A lot of it is probably just my lack of experience in C++ where I don't really expect implicit data to be accessed
01:40:00 <heat> rust cured the blind and died for our sins
01:40:00 <gamozo> The "path to the value" is very unclear
01:40:00 <gamozo> Ahaha
01:40:00 <gamozo> I think Rust is too verbose for a lot of people, personally I love that aspect of it
01:41:00 <heat> a good editor with a language server is pretty useful for C++
01:41:00 <gamozo> Yeah, I think that makes it pretty usable. Unfortunately, a lot of the code I have doesn't build or integrate into language servers ahaha.
01:41:00 <Griwes> what i really want is to win a lottery so I can quit work so I can make my language a thing so I can actually write what i *want* to write
01:41:00 <heat> when you don't remember how the code/classes/structures look
01:41:00 <Griwes> but, alas
01:41:00 <gamozo> Griwes: that sounds awesome
01:42:00 <gamozo> Arguably OSdev is my path to that, but with OSes
01:42:00 <heat> gamozo, how doesn't it build with language servers?
01:42:00 <heat> is it not standard GCC/clang code?
01:42:00 <gamozo> heat: :shrug:
01:42:00 <gamozo> Oh yeah, lots of Windows stuff
01:42:00 <heat> i don't know how intellisense works but that's also a thing
01:42:00 <gamozo> Lots of things that are cross-compiled or built in hacky environments
01:42:00 <Griwes> in the ideal world, I'd be writing the OS in my lang :V
01:43:00 <gamozo> vim + ctags are still, unfortunately, the best solution for most of what I do
01:43:00 <gamozo> At least, with my workflow. It's very possible I just have a bad workflow
01:43:00 <gamozo> Griwes: that sounds transcendent
01:43:00 <heat> in your sillicon
01:43:00 <gamozo> sillycon
01:44:00 <heat> true nirvana
01:44:00 <Griwes> if I had infinite time, it'd be on my own ISA, yeah
01:44:00 <heat> i o w n t h e c o m p u t e r
01:44:00 <Griwes> but even in the dream world I know there's limits and that I have to choose my battles :P
01:44:00 <gamozo> Intel still has ME running somewhere on the chip
01:44:00 <gamozo> Wait you can choose your battles instead of just fighting all of them and losing them all?!
01:44:00 <heat> maybe y o u h a v e b e c o m e M E
01:45:00 <heat> you can fight one and waste all your life doing it
01:45:00 <heat> turns out operating systems are complicated haha
01:45:00 <Griwes> gamozo, yeah :| that's how you can tell I'm approaching 30
01:46:00 <gamozo> I'm in that same boat, 28
01:46:00 <gamozo> Same hard decisions
01:47:00 <gamozo> I have decided osdev is in scope, I am allowed to write my own code for my own OS instead of Linux. That's a recent decision I had to make
01:47:00 <heat> what would be the other option
01:47:00 <gamozo> Probably writing hefty amounts of drivers for Linux
01:48:00 <heat> that's also fun
01:48:00 <heat> different, but fun
01:48:00 <heat> you can also
01:48:00 <heat> do
01:48:00 <heat> both
01:48:00 <gamozo> I'm not happy with really any OS-es virtual memory management, I really think page-tables are underused for unique data structures
01:48:00 * gamozo sweats
01:48:00 <heat> whats your idea
01:49:00 <gamozo> Oh, I just do a lot of things with NUMA and separated page tables based on your numa node (turning the computer into separate computers effectively)
01:49:00 <gamozo> It's the model I've been doing for my past few OSes and it's the only way I scale well with hardware
01:49:00 <gamozo> TLB shootdowns make virtual memory management soooo slow, which then makes anything that requires a lot of mapping/unmapping non-viable.
01:50:00 <heat> you can delay tlb shootdowns
01:50:00 <heat> or avoid them altogether
01:50:00 <gamozo> I'm only comfortable with not doing TLB shootdowns unless the data is exclusively owned
01:51:00 <heat> huh?
01:51:00 <gamozo> If not doing a TLB shootdown _could_ be invalid, then I'm not comfortable not doing it. Like if I don't know if the data is being shared between cores
01:52:00 <heat> ah
01:52:00 <heat> well, it depends
01:52:00 <heat> if its kernel memory, you need to shoot it down if it has been accessed (A bit in the page tables, or whatever soft bit you have)
01:52:00 <gamozo> Personally, I've been doing every-core-gets-its-own-page-table and I really like it
01:53:00 <heat> in user memory, you can take a look at the A bit and at the active cpu set (some cpu set where you track where the process has been)
01:53:00 <heat> well, not has been but is
01:54:00 <gamozo> Makes sense. I just really hate the round-trip-time between cores (I wish Intel gave us direct access to the cache coherency bus for IPC or something)
01:54:00 <gamozo> I only really do compute though, so I really really care about even small costs
01:55:00 <gamozo> I do a lot of differential stuff with page tables and stuff so I'm constantly modifying them or traversing them. I also do a lot of cross-core aliasing of memory to their node-local copies (thus at runtime I don't have to check between addresses)
01:57:00 <Griwes> my main problem with tlb shootdowns is that I am really trying to keep the core parts of the kernel (as opposed to threads that stay in the kernel mode) not interruptible, which means that I will probably need to make all in-kernel lock poll loops keep doing a check for "is someone requesting that I do a tlb shootdown" so that I don't trivially deadlock when two threads of the same process attempt to unmap something
01:57:00 <gamozo> I've been recently doing my OSes with a Rust-like memory model, your virtual memory is either A. shared and immutable, or B. exclusive and mutable. I really really like it and it really helps me get all the perf out of my chips
01:58:00 <Griwes> yes I am fully aware that I *will* have bugs with this, can't wait to debug those deadlocks
01:58:00 <gamozo> Griwes: Ugh, yeah. I used gross NMIs and a slight amount of state
01:58:00 <Griwes> yeah I considered NMIs for a second but eventually recoiled in terror
01:59:00 <gamozo> As any healthy person should
01:59:00 <Griwes> the rust model at VAS level means that you can never have lock-free programs, and they are just inherently cool (when they work)
01:59:00 <Griwes> :P
01:59:00 <gamozo> Oh yeah, I effectively only do lock-free now and I love it
02:00:00 <Griwes> gamozo, oh I don't know about the "healthy" part
02:00:00 <gamozo> When I got my Phi and I saw that `lock inc` takes 30,000 cycles, I realized very quickly that even just shared memory was really slow
02:00:00 <gamozo> (in theory shared memory is good for IPC as it's the fastest IPC, but that you can kinda do "rarely")
02:01:00 <gamozo> It is crazy to me how expensive cache coherency is. The fact that my OS effectively puts all cache lines in their optimal states really actually seems to matter
02:02:00 <Griwes> my main gripe with shared memory is that I cannot really pass kernel resources through it, so if I want zero copy for buffers and whatnot, I cannot not touch the kernel
02:02:00 <Griwes> oh yeah cache coherency is... quite the thing
02:02:00 <heat> what kernel resources?
02:02:00 <gamozo> Yeah, I think if I made a GPOS (and somehow could dictate the user-land experience). Pretty much everything would be in shared-memory
02:02:00 <Griwes> I mean in case of trying to pass buffers around, VMOs
02:03:00 <gamozo> I think CPUs should have a small range set (some MSRs) that allow a kernel to give a user-land application access to raw virtual memory and paging
02:03:00 <Griwes> my object model is that a kernel object identifier (a "token") is a per-process thing, it's only valid from within that process, so I can't just pass it, it has to be translated by the kernel for consumption by the other side of IPC
02:03:00 <heat> you could map them
02:04:00 <Griwes> and to map them
02:04:00 <Griwes> someone has to create it
02:04:00 <Griwes> and then pass a token elsewhere ;>
02:04:00 <heat> the kernel would implicitly map the VMO and maybe queue the handle in some socket
02:04:00 <heat> that could save you the trip
02:04:00 <Griwes> also the VMM is outside of the kernel :P
02:04:00 <heat> shizzle
02:05:00 <Griwes> :'D
02:05:00 <heat> how does that work?
02:05:00 <heat> does every privileged op go to the kernel?
02:05:00 <heat> invlpg, cr3 loading, etc
02:06:00 <Griwes> I should s/is/will be/
02:06:00 <Griwes> I don't mean all VM management
02:06:00 <Griwes> vasmgr is probably a better term
02:06:00 <Griwes> the kernel still does all the mapping and invalidation and whatnot
02:07:00 <gamozo> Alright the real question. Do you use GOTs or do you map all shared objects at the same address in all processes :D
02:07:00 <Griwes> but it doesn't manage the address space itself
02:08:00 <Griwes> eventually having ASLR is a must
02:08:00 <gamozo> Honestly, it's a bit naughty, but I like ASLR just for debugging because it really stresses that you correctly are moving everything around and not doing fixed addresses
02:08:00 <gamozo> even if not for the mitigation, I think it keeps me honest
02:09:00 <heat> what, ASLR is horrific for debugging
02:09:00 <heat> 1) notice it always crashes/misbehaves on address X
02:09:00 <heat> 2) reboot the OS
02:09:00 <Griwes> it's great for stress testing
02:09:00 <heat> 3) different address
02:09:00 <heat> fuck
02:10:00 <Griwes> awful for actually debugging
02:10:00 <heat> what are you stress testing with aslr?
02:10:00 <Griwes> well, at least your elf loader
02:10:00 <gamozo> Ahaha, yeah I know what you mean for horrible for debugging. Mainly mean it keeps my code better. There's so many situations where code relies on some weird undefined behavior and the same address makes it ok
02:11:00 <heat> it's not your elf loader that does ASLR
02:11:00 <Griwes> I didn't say it is
02:11:00 <Griwes> but if you run it always with the same address bases, you may miss a bug that happens with offsets being slightly different
02:11:00 <heat> also, the worst of all the ASLR issues: "haha your toolchain is default PIE and now every program's base is constantly changing. crashing? oh no!"
02:12:00 * Griwes 's toolchain is forcibly PIE
02:12:00 <heat> and yes my toolchain is default PIE and default SSP strong 😎
02:13:00 <heat> forcibly PIE :(
02:13:00 <Griwes> I'll get to stack protectors eventually
02:13:00 <gamozo> I don't do shared objects so it's all easy for me :D
02:13:00 <gamozo> as long as my bootloader can put my kernel at a random address and do fixups I'm all good!
02:16:00 <gamozo> I normally have an "ASR" mode for my system where I can boot it up and all allocations are completely randomly placed in VA space
02:16:00 <gamozo> But it makes mapping fairly slow as you have such a weird page table layout
02:16:00 <gamozo> Good for testing that everything can be moved around and nothing is accidentally relying on a fixed address or something
02:17:00 <heat> i think fuchsia does something close to that
02:17:00 <gamozo> It's honestly not too bad, but a lot of kernels use linked lists for virtual memory regions (for some reason???), so it often isn't super great
02:17:00 <gamozo> I made patches for FreeBSD at one point to add it and the perf hit was massive as I think mmap() was O(N) WRT virtual memory regions in a process
02:18:00 <gamozo> so after a bunch of regions, you start getting extremely expensive allocations
02:18:00 <heat> linked lists?????????????
02:18:00 <gamozo> it's a solvable problem if you just use a page-table model for memory maps (or just a graph-based structure) in your kernel
02:18:00 <gamozo> Although graphs are then bad for low N
02:18:00 <heat> the standard is a red black tree or an avl tree
02:19:00 <heat> any kind of self-balancing binary tree
02:19:00 <gamozo> for maybe core structures, but _something_ in the loop had it
02:19:00 <gamozo> someone at some layer of the stack tacked on an O(N) structure
02:19:00 <gamozo> (or something enumerated the tables or something, when it maybe didn't have to)
02:19:00 <Griwes> that's... so bad
02:20:00 <gamozo> I mean, Linux and Windows both have _very_ bad mmap scaling with regions
02:20:00 <gamozo> is this not like, a thing that people complain about? I definitely get mad about it
02:20:00 <Griwes> every day I find so many new and exciting reasons for why no software should ever work
02:20:00 <gamozo> VMMs scale terribly on Linux and Windows with cores as well as with regions
02:20:00 <gamozo> they work fine with # bytes, but it's # regions that it's bad
02:20:00 <gamozo> Getting my Xeon Phi was the best decision of my life
02:21:00 <gamozo> It is the true test of scalability of stuff, and holy shit I can't even use Linux on it
02:21:00 <Clockface> is DOS/16 bit x86 still used in new embedded systems?
02:21:00 <gamozo> You get these 1.3 GHz clocked Atoms, that are maybe equiv to 200-500 MHz modern Xeons, and a boatload of them. If your code doesn't scale well it's painfully apparent. I _love_ it, but I'm also a perf masochist
02:22:00 <heat> Clockface, I don't think so
02:22:00 <gamozo> Clockface: Jeez. I'm sure in some legacy places (eg. new products running old company code that "already works"). But I think most things are UEFI
02:22:00 <Griwes> yeah I've heard phi perf horror stories from HPC people
02:22:00 <heat> embedded x86 is pretty niche, and DOS would defeat the point of having a new system
02:22:00 <gamozo> Embedded seems to love UEFI
02:22:00 <gamozo> Honestly, the Phi is fucking fantastic
02:22:00 <heat> really? UEFI?
02:23:00 <Clockface> are there any single-tasking operating systems that have absolutely no protection, like DOS was?
02:23:00 <gamozo> But you have to write code very carefully for it. It definitely requires an extremely specialized dev
02:23:00 <Clockface> or is UEFI enough to be a DOS replacement
02:23:00 <gamozo> heat: Idk, I feel like most people have been doing UEFI at this point. Maybe it's my sampling of reality.
02:23:00 <gamozo> The ecosystem around it is great open-source
02:24:00 <heat> Clockface, UEFI is enough to be more than a DOS replacement
02:24:00 <heat> gamozo, yes, but I wouldn't do embedded on it
02:24:00 <gamozo> Like, it's way easier to get a UEFI build of some open-source bios working before you get some massive BIOS vendor to get you a custom BIOS for your chip
02:24:00 <gamozo> oh yeah, I wouldn't do it on it, but it's what I see
02:24:00 <heat> if I wanted to do embedded, I would use something like lk
02:24:00 <heat> lightweight and has interrupts, threads
02:24:00 <gamozo> Yeah, I think that's why all the bootloaders fork lk
02:24:00 <heat> all you ever need really
02:25:00 <gamozo> The problem is, most companies making embedded devices don't have devs who are really that broad with their skillsets
02:25:00 <heat> the two major limitations of UEFI are that it has no threads and no interrupts
02:25:00 <heat> sure but UEFI is pretty niche
02:25:00 <gamozo> Yeah, but even as just a boot environment it's so much better than a BIOS
02:25:00 <heat> if I had a dev that knew UEFI I would let them work on bootloader/firmware stuff
02:25:00 <heat> not "hey look, we have this embedded app, make it run on UEFI kthxbye"
02:26:00 <gamozo> really? Idk, I think UEFI is just the only thing that really is standardized for boot across arches, and so I feel like it's everywhere now
02:27:00 <heat> sure, it's "everywhere"
02:27:00 <heat> but how many people have written kernel or firmware code?
02:27:00 <heat> also in a "standard C app" sense, lk is way closer to it than UEFI
02:27:00 <gamozo> Tbh, honestly I think more with UEFI
02:27:00 <gamozo> Like, pretty much all the stuff in the phone world are UEFI
02:28:00 * heat laughs in GUIDs
02:28:00 <heat> gamozo, huh, really?
02:28:00 <heat> I thought it was all device tree?
02:28:00 <gamozo> and switches, and routers, and pretty much all of my devices that have "new" embedded code written
02:28:00 <gamozo> I'm thinking x86 devices, which are a bit weirder lol
02:28:00 <heat> well, sure
02:28:00 <gamozo> but like, all the modern hardware I have is starting to get UEFI-only
02:29:00 <Clockface> UEFI has a premade C library, which cool
02:29:00 <heat> those need to be UEFI or some custom device tree thing
02:29:00 <gamozo> which means that there's a massive market of people doing UEFI dev for all these devices
02:29:00 <heat> Clockface, yes and no
02:29:00 <heat> edk2-libc is a thing, but it's not part of UEFI, just edk2
02:29:00 <gamozo> Don't get me wrong, I'm not saying that UEFI is really that great or anything, but I genuinely do think it has lowered the difficulty in writing code at that level, and I think it's led to a lot of it being written/produced
02:29:00 <heat> UEFI does give you a lot of interfaces but none of those are standard C library stuff
02:30:00 <heat> have you actually looked at what they're doing or are you just guessing?
02:30:00 <Clockface> it isn't stdc but being compliant with that is only important if you want to run normal programs in UEFI
02:30:00 <heat> because I don't see the point in writing custom UEFI code
02:30:00 <Clockface> which i dont think anyone wants to do
02:30:00 <heat> wrong
02:30:00 <heat> edk2-libc has a python port!
02:31:00 <gamozo> Nah, this is what a lot of things are doing. I'm fairly familiar with a handful of stuff at this level (like my networking equipment, bunch of modern embedded stuff)
02:31:00 <gamozo> I lift a lot of chips off boards
02:31:00 <heat> well, yuck
02:31:00 <heat> UEFI isn't that suited for stuff like that
02:31:00 <gamozo> The main thing is just the cost of ram/compute to do UEFI + maybe a barebones Linux is getting so low (eg. chips that can handle a workload that large)
02:32:00 <gamozo> Like I think my friends MMO mouse he was saying, ran Linux
02:32:00 <heat> well yes, but barebones linux doesn't require UEFI code
02:32:00 <gamozo> after he was trying to mod it and looked into the firmware
02:32:00 <Clockface> why does a mouse need linux
02:32:00 <gamozo> oh, of course
02:32:00 <gamozo> but the thing is, it's just like, the "easy" ecosystem now
02:32:00 <gamozo> pretty much anyone can get UEFI (open) + Linux (open) + a relatively standard arm-chip
02:33:00 <gamozo> and now you can hire a normal user-land dev to do your whole stack (at the cost of a slightly more expensive chip)
02:33:00 <Clockface> i know a guy who still makes all his products with 8 bit Motorola 6809s or something, he has basic user interfaces with keypads and little displays too
02:33:00 <heat> UEFI isn't that open
02:33:00 <gamozo> It really isn't, I agree
02:33:00 <heat> you can't actually "just run arm" on it
02:33:00 <gamozo> Look, I'm just saying this as an observer of weird stuff. I personally think we are extremely wasteful with the way we write code
02:33:00 <heat> well, you can, but there are not a lot of options
02:33:00 <gamozo> I honestly kinda say this more in a state of shock
02:34:00 <gamozo> Like the Freedom Phone I ordered to reverse, turns out the entire flash is unlocked
02:34:00 <heat> i would love to look at what you're talking about, that looks absolutely disgusting
02:34:00 <gamozo> so it effectively offers 0 security
02:34:00 <Clockface> well, maybe what he does doesnt justify more than that, but he does seem to push things further than people do with the little 8 bit chips normally, which i admire
02:34:00 <heat> the freedom phone haha
02:35:00 <gamozo> It's fantastic
02:35:00 <gamozo> I thought I got scammed
02:35:00 <heat> you did
02:35:00 <gamozo> I opened another bank account to wire them the money cause I did not trust buying it
02:35:00 <gamozo> I mean
02:35:00 <gamozo> I got a gem of a phone
02:35:00 <gamozo> I will protect this with my life
02:35:00 <heat> it's priceless
02:35:00 <gamozo> I can flash any part of the phone
02:35:00 <gamozo> it's a piece of history
02:35:00 <gamozo> I can flash literally stage 1 bootloader
02:35:00 <gamozo> I have a factory dev phone with a common mediatek chip
02:35:00 <gamozo> I'm very happy
02:36:00 <gamozo> and mediatek has a baked in true-rom for recovery that I can talk to
02:36:00 <gamozo> So every _byte_ of flash is up for grabs for me to run at any level of the boot process
02:36:00 <gamozo> To me, this is super fun, I can reverse (and patch and play around with) all stages of the phone
02:37:00 <gamozo> I like getting code running in very unique places :D It's like golf
02:38:00 <gamozo> That being said, my embedded experience is probably on the higher-end of the cost spectrum (more modern things, networking stuff, etc). But like the whole Microsoft SONiC stack is huge on switches now
02:38:00 <gamozo> which is UEFI + ONIe + SONiC
02:38:00 <gamozo> I _feel_ like UEFI has made embedded dev more reachable to people who maybe don't have access to systems-level devs
02:38:00 <heat> that's horrific, why would anyone need UEFI on that
02:39:00 <heat> i bet they just chainload a kernel
02:39:00 <gamozo> Because you can hire linux devs for cheaper than your firmware devs *cough*
02:39:00 <gamozo> *cough*
02:39:00 <gamozo> I hate it
02:39:00 <gamozo> I really do
02:39:00 <gamozo> But it's true
02:39:00 <gamozo> SONiC is actually just Ubuntu
02:40:00 <gamozo> yeah, my switch, runs UBUNTU
02:40:00 <gamozo> debs and all
02:40:00 <heat> well, depends on the switch I guess
02:40:00 <Clockface> the guy i know says he can't figure out C so he only uses assembly, that man is a living legend, i am in wonder of his computer habits
02:40:00 <heat> if it runs ubuntu, that's fine
02:40:00 <Clockface> he runs a mac, windows XP, and a dos machine for some of his programs
02:40:00 <Clockface> *he had the dos machine for a really long time
02:41:00 <heat> gamozo, doing stuff in userspace is actually a pretty good idea
02:41:00 <gamozo> (tbh, I think RAM and flash have gotten cheap enough that the cost of the dev is more expensive)
02:41:00 <gamozo> For sure
02:41:00 <heat> i probably wouldn't pick ubuntu but whatever floats their boat
02:41:00 <heat> i'd probably pick alpine
02:41:00 <heat> or debian?
02:42:00 <gamozo> haha yeah, it's a weird pick. It could be debian but the version numbers felt more Ubuntu
02:42:00 <gamozo> https://github.com/sonic-net/SONiC
02:42:00 <bslsk05> ​sonic-net/SONiC - Landing page for Software for Open Networking in the Cloud (SONiC) - https://sonic-net.github.io/SONiC/ (804 forks/1435 stargazers)
02:42:00 <gamozo> personally, I think it's an extremely sloppy mess
02:42:00 <gamozo> BUT
02:42:00 <gamozo> People love it!
02:42:00 <gamozo> I absolutely hate configuring on it
02:43:00 <gamozo> It regularly breaks, a lot of commands do _not_ work
02:43:00 <gamozo> Like literally python backtraces (not errors, just broken scripts)
02:43:00 <heat> it looks fine
02:43:00 <heat> maybe they assembled it wrong but it looks totally fine
02:43:00 <heat> they even use zebra
02:44:00 <heat> *why is it running everything in a docker**
02:44:00 <gamozo> Ahahaha
02:44:00 <gamozo> "it's easy"
02:45:00 <heat> well, I mean
02:45:00 <heat> this is totally not UEFI code
02:45:00 <gamozo> So
02:46:00 <heat> literally just GRUB 2.0 that boots linux
02:46:00 <heat> you could probably skip a step and use the efi stub
02:46:00 <gamozo> https://github.com/opencomputeproject/onie
02:46:00 <bslsk05> ​opencomputeproject/onie - Open Network Install Environment (335 forks/472 stargazers/NOASSERTION)
02:46:00 <gamozo> This is the install environment that most switches ship with that SONiC installs over
02:47:00 <gamozo> SONiC is effectively a tarball that you give to onie which is unpacked onto your system
02:47:00 <gamozo> In theory you could put it anywhere
02:47:00 <gamozo> but the ecosystem is combined with onie
02:47:00 <gamozo> https://www.supermicro.com/datasheet/datasheet_SMCI_Networking-ONIE-based_switches.pdf
02:47:00 <gamozo> Being "onie" is actually like, a big deal right now in the switch world
02:48:00 <gamozo> Lots of vendors will advertise they're ONIE-based
02:48:00 <gamozo> I don't know if ONIE _requires_ or just supports UEFI, but I've only seen it with UEFI
02:48:00 <gamozo> It's honestly a new ecosystem to me
02:48:00 <heat> well if this is the "embedded uses UEFI everything is bad" you were talking about, we're fine
02:49:00 <heat> i was actually thinking they ran routing on top of UEFI
02:49:00 <heat> defo not the greatest idea
02:49:00 <gamozo> OHHH
02:49:00 <gamozo> yeah
02:49:00 <gamozo> sorry didn't mean it as uefi application
02:49:00 <gamozo> my b
02:49:00 <gamozo> that's how I read the initial question
02:49:00 <heat> they were talking about using UEFI as DOS
02:50:00 <gamozo> Yeah, I guess DOS usually just isn't a thing on UEFI systems was the idea in my head
02:51:00 <heat> UEFI works as a DOS replacement
02:51:00 <gamozo> Idk the last time I saw DOS tbh, in a new device
02:52:00 <Clockface> would it still save money to use tiny 8/16 bit devices if they are making a huge amount of the product even if it needs more development
02:52:00 <heat> no
02:52:00 <gamozo> The hard part might be justifying the workforce that can work on that stuff
02:52:00 <heat> a 32-bit arm controller is like 2 euro
02:53:00 <GreaseMonkey> you'd have to consider, say, one of the 3 cent padauk microcontrollers if you're really trying to save money
02:53:00 <gamozo> Now you have to hire a firmware dev at your Furby factory :D
02:53:00 <Clockface> yes but what if you never update the product
02:54:00 <Clockface> if its done and it works, you dont have to pay anyone
02:54:00 <gamozo> Then you can't advertise it as cloud and offer a subscription to your microwave ads!
02:54:00 <Clockface> good point
02:54:00 <gamozo> If it doesn't run Linux, it doesn't run Chrome, and if it doesn't run Chrome it doesn't run Electron, and if it doesn't run electron, you're not selling ads
02:54:00 <Clockface> it needs to be part of web 3.0.1 prerelease-beta
02:54:00 <gamozo> or writing your UI in javascript
02:55:00 <gamozo> Shit even the Windows UI is in javascript at this point *grumble grumble*
02:55:00 <Clockface> this is why i dont want to go into programming
02:55:00 <gamozo> Like aren't the new fridges and stuff serving touch screens + web UX?
02:55:00 <gamozo> Loooool
02:56:00 <gamozo> I just love complaining, but I am very disappointed that after a lot of Moore's law, my computer is less responsive than it was in 2005
02:56:00 <klange> The unfortunate fate of webOS...
02:56:00 <geist> sick burn
02:57:00 <Clockface> some technologies are smarter than the societies that use them
02:57:00 <Clockface> i guess modern semiconductors are one of those
02:57:00 <Clockface> lol
02:57:00 <heat> love the doomer vibes tonight
02:57:00 <klange> Coulda been a contendah in the mobile space, RIP Palm. Now powering TVs and refrigerators at LG...
02:57:00 <Clockface> imagine how cheap stuff could be if they didn't bloat up
02:57:00 <gamozo> I gotta get my doomer under control, my bad
02:58:00 <geist> yah and bummer that most of what i wrote for webos LG didn't take up
02:58:00 <geist> except maybe novacom
02:58:00 <geist> but they didn't continue to use bootie the bootloader
02:58:00 <geist> RIP
02:58:00 <gamozo> Bootie the Bootloader? Is that the name?
02:59:00 <geist> that is the name
02:59:00 <gamozo> Aha, that's great.
02:59:00 <geist> was a darn nifty bootloader if i dont say so myself
02:59:00 <gamozo> Was there a mascot/logo?
03:00:00 <geist> alas no
03:00:00 <geist> though we had a logo for Trenchcoat, the flashing tool
03:01:00 <gamozo> That's great. I've been commissioning art recently when I come up with projects, it's just a fun process
03:03:00 <gamozo> I see LK everywhere at least :D
03:03:00 <gamozo> At least a lot of bootloaders I open to reverse
03:05:00 <mxshift> As much as UEFI is horrible, booting modern x86 from the reset vector is a minefield of footguns
03:06:00 <heat> UEFI also boots from the reset vector
03:06:00 <heat> and it's not horrible
03:07:00 <heat> unless you dream about calling bios interrupts that you read about on a crappy .txt
03:08:00 <gamozo> http://helppc.netcore2k.net/
03:08:00 <bslsk05> ​helppc.netcore2k.net: Welcome to HelpPC 2.10 Quick Reference Utility :: HelpPC 2.10 - Quick Reference Utility :: NetCore2K.net
03:08:00 <gamozo> my friend still has nosted this to this day
03:08:00 <gamozo> ahahaha
03:08:00 <gamozo> hosted*
03:08:00 <gamozo> I love it so much
03:09:00 <gamozo> Anyone doing RISC-V stuff? I've only done userspace stuff
03:10:00 <heat> yes i've done that
03:12:00 <mxshift> I mean UEFI does a lot of work for you that is difficult to replicate.
03:13:00 <mxshift> Work is going straight into Rust at the x86 reset vector and then doing all the init ourselves.
03:14:00 <gamozo> hnggg
03:15:00 <Clockface> were you referring to Rust lang or were you using a metaphor
03:16:00 <mxshift> Rust lang
03:17:00 <Clockface> whats rust better at than C
03:17:00 <Clockface> i never looked at it much
03:18:00 <gamozo> Personally, it's a much stricter language and requires you to explain more clearly what your intentions are to the compiler. As a reward for this, the compiler can give you better optimizations, safety guarantees, etc
03:18:00 <heat> everything
03:18:00 <gamozo> The more you communicate with the compiler, the more the compiler can reason about your code
03:19:00 <heat> except compile time
03:19:00 <gamozo> And that's big for many reasons
03:19:00 <heat> compile time sucks
03:19:00 <Clockface> i found C too strict for my abominations so i program in assembly as much as i can
03:19:00 <gamozo> I've only ever had compile time issues on third party things and I don't really know why. My OS builds in like 2 seconds, including the bootloader
03:19:00 <heat> yikes
03:19:00 <gamozo> But some deps take ~60+ seconds, I don't know how?
03:19:00 <heat> might depend on your code
03:19:00 <gamozo> I think a lot is procmacros
03:20:00 <heat> at least in C++ you can have relatively fast compiles and then the late stage C++ that is LLVM
03:20:00 <gamozo> And probably templating, which I honestly use fairly heavily, so maybe not
03:20:00 <heat> also, LTO
03:20:00 <gamozo> LTO is hot hot hotr
03:20:00 <gamozo> So good for code size
03:21:00 <Clockface> my macro language has been doing surprisingly ok, i have versions of it for x86-64 linux and 16 bit BIOS, the 16 bit DOS binaries have a boot sector but can also be loaded by DOS
03:21:00 <Clockface> macro languages arent so bad actually
03:21:00 <Clockface> it went ok
03:22:00 <Clockface> probably best as a way to boost productivity when writing assembly rather than as a full language
03:23:00 <Clockface> and the one i made is pretty bad, but i like where its going
03:23:00 <heat> if you get an old C compiler it probably feels like using assembly macros
03:25:00 <Clockface> the thing i absolutely despise about what i have created are the conditional loops
03:26:00 <klange> I ported my bytecode VM language to run bare-metal protected mode in a state that can easily jump back to BIOS, and also to EFI. I was thinking of writing an overly complicated scriptable bootloader with it. Like GRUB!
03:26:00 <mxshift> Hubris builds are mostly spent building the various PAC crates which are register accessors generated from XML files
03:26:00 <mxshift> Stuff that generates a lot of Rust code in a single crate really adds up in build times
03:27:00 <Clockface> it doesnt evaluate if the condition is true until it reaches CONTINUE, which i have found plenty of ways around, but its awful
03:27:00 <heat> klange, do it
03:27:00 <Clockface> and once i change that ill like it more
03:27:00 <Clockface> IF works fine, but while is almost unusable imo
03:29:00 <Clockface> it compiles to position independent code
03:31:00 <Clockface> it was fun
03:31:00 <Clockface> eventually it will be viable and have a proper compiler
03:31:00 <Clockface> later
03:31:00 <Clockface> when frogs rain from the heavens
03:33:00 <klange> Hm, where did I put that protected mode one... ugh is it in a branch of ToaruOS itself...
03:33:00 <sonny> mxshift: do you know how rust userspace drivers work?
03:33:00 <klange> The EFI one is on Github, works great as an "EFI app", especially if you have an EFI shell.
03:33:00 <sonny> rather, eli5 if possible
03:33:00 <sonny> s/rust/hubris
03:34:00 <klange> Even has most of the REPL functionality, though its syntax highlighting color options are limited as it uses the normal text output APIs and despite definitely always running in a graphics mode, those don't offer rich color options like a dedicated terminal emulator would...
03:34:00 <No_File> Clockface they alrdy do https://imgur.com/oLyuuiW
03:34:00 <bslsk05> ​'TempleOS' by [idk] (--:--:--)
03:35:00 <mxshift> Yes. What aspect of hubris drivers are you thinking about?
03:35:00 <sonny> I heard that they are in user space?
03:35:00 <sonny> via shared memory somehow?
03:36:00 <klange> Still one of the most amusingly pointless projects I've worked on. https://klange.dev/s/Screenshot%20from%202022-05-18%2012-36-01.png
03:36:00 <mxshift> Reference docs at https://hubris.oxide.computer/ are probably the best place to start
03:36:00 <bslsk05> ​hubris.oxide.computer: Hubris
03:36:00 <sonny> ok
03:37:00 <mxshift> There is no shared memory in hubris. Only message passing with a lease mechanism
03:38:00 <sonny> interesting
03:39:00 <mxshift> Technically you can abuse the peripheral mappings to allow two tasks access to an address range but you shouldn't ever need to
03:39:00 <Clockface> we live in an age where chromium may rust
03:39:00 <Clockface> :)
03:40:00 <Clockface> they dont have rust in it yet though, i know firefox is messing with it
03:41:00 <klange> Firefox has been "messing with it" for quite some time.
03:41:00 <klange> Mozilla does a lot of Rust.
03:42:00 <mxshift> Servo feels far beyond "messing with it"
03:42:00 <gamozo> They're dabbling
03:42:00 <klange> Testing the water. Getting their feet wet.
03:44:00 <heat> https://upload.wikimedia.org/wikipedia/commons/8/8a/Discover_Scuba_Diving_--_St._Croix%2C_US_Virgin_Islands.jpg aww geez you know, at firefox we're just getting our feet wet in rust, testing the waters
03:47:00 <Clockface> well they havent started breathing the water yet
03:47:00 <Clockface> according to the picture
03:47:00 <heat> why the fuck would you breathe the water
03:47:00 <heat> you would just die
03:47:00 <heat> do you want firefox to die?
03:47:00 <sonny> lol
03:48:00 <Clockface> when i was a little kid i tried to breathe water
03:48:00 <Clockface> i regretted it every time, but i had a twinge of hope each time that i would do it and retain the ability
06:38:00 <mats1> never give up your dreams
06:39:00 <mats1> we believe in you
06:41:00 <kingoffrance> ^ just transmute the water to air and breathe away
06:44:00 <No_File> transmute penis into non penis
08:21:00 <mrvn> kingoffrance: but the hydrogen makes my voice sound all funny
08:32:00 * moon-child lights a match
10:44:00 <sikkiladho> how do you keep the secondary cpus idle until the primary cpu requests them? send them into a loop conditioned on a volatile int, with the primary cpu setting the int to zero when it needs them?
10:44:00 <zid> halt
10:44:00 <zid> send them an interrupt to wake them up
10:46:00 <clever> wfi or wfe opcode is what i would do
10:46:00 <zid> I'd do 'hlt'
10:46:00 <zid> but you're probably talking about some bizarre cpu nobody uses
10:46:00 <clever> zid: is hlt valid on arm?
10:47:00 <zid> he never said arm
10:47:00 <zid> you just have it on the brain
10:47:00 <klange> it's a bit lower than the brain, and most people have two of them, but
10:48:00 <zid> those are called testicles
10:48:00 <clever> zid: sikkiladho has been working on an arm hypervisor for months now
10:48:00 <zid> still doesn't mean you can insinuate
10:51:00 <sikkiladho> I'm using arm64, I think clever knows about my project more than me maybe. XD
10:53:00 <sikkiladho> clever: i have secondary cpus in control of the hypervisor, I send them in a loop. linux makes the smc, hyp traps it. How would I pass the secondary cpus to the kernel's requested address?
10:54:00 <clever> using an IPI
10:54:00 <clever> you need to look into how SMP is done on arm, and how to pass messages between cores
10:55:00 <sikkiladho> that's great. Thank you. Will look into it.
10:55:00 <clever> then the hypervisor on core0 can send a message to the hypervisor on core1
10:55:00 <clever> and core1 can then execute the linux entry-point, as directed by the PSCI
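A minimal user-space sketch of the spin-on-a-flag parking scheme sikkiladho describes, with a pthread standing in for the secondary core (all names are illustrative; a real hypervisor would park the core in wfe/hlt and wake it with an IPI or SEV rather than busy-spin):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* The primary publishes an entry point; the parked secondary spins
 * until it sees one, then branches to it. Illustrative names only. */
static _Atomic(void (*)(void)) entry_point;
static atomic_int secondary_ran;

static void kernel_entry(void) {       /* stand-in for the kernel's entry */
    atomic_store(&secondary_ran, 1);
}

static void *secondary_cpu(void *unused) {
    (void)unused;
    void (*ep)(void);
    /* Park: on real hardware this loop body would be a wfe (ARM)
     * or hlt (x86) so the core sleeps instead of burning power. */
    while ((ep = atomic_load_explicit(&entry_point,
                                      memory_order_acquire)) == NULL)
        ;
    ep();                              /* jump to the requested address */
    return NULL;
}
```

The acquire/release pairing on `entry_point` is what makes the handoff safe without a lock: everything the primary wrote before publishing the pointer is visible to the secondary after it loads it.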
15:02:00 <No_File> https://www.youtube.com/watch?v=LcafzHL8iBQ now we know why we using TempleOS
15:02:00 <bslsk05> ​'Windows 11 Must Be Stopped - A Veteran PC Repair Shop Owner's Dire Warning - Jody Bruchon' by Jody Bruchon (00:18:56)
15:04:00 <heat> "This is how Microsoft will make every piece of computing hardware submit to their whims within the next decade." bullshit incorporated
15:07:00 <heat> 1) linux users don't need to have their bootloaders signed by microsoft, they just need to sign them with their own keys
15:07:00 <heat> whether it's a microsoft one or not doesn't matter
15:10:00 <heat> WHAT
15:10:00 <heat> he's saying disk encryption is bad because you can't recover data from linux
15:13:00 <heat> naww this is so bullshit
15:14:00 <gog> i will never use windows 11
15:17:00 <GeDaMo> The last version of Windows I installed on my machine was 3.1 :P
15:17:00 <GeDaMo> Hmmm ... I might have installed NT 3.51 at some point to have a look
15:18:00 <heat> that vid is basically a fearmongering paranoid take on windows 11 and UEFI + TPM by someone that doesn't fully understand things
15:18:00 <heat> I think he goes on a rant about apple being bad as well later in the vid
15:26:00 <zid> I mean, yes, they are all bad, but full disk encryption is to hide your porn
15:26:00 <zid> so either hide your porn or don't
15:40:00 <heat> interesting
15:40:00 <heat> the RCU patents seem to have expired
15:41:00 <mrvn> I though porn is to hide our secrets in.
15:47:00 <heat> actually do I need to respect US patents at all if I'm not in the US lol
15:47:00 <zid> depends if you wanna get extradited and shot
15:48:00 <heat> the US doesn't extradite and shoot europeans homie
15:48:00 <heat> it does for the rest of the world though
15:48:00 <zid> yes they do
15:50:00 <heat> anyway law complicated
15:50:00 <heat> free legal advice where
15:50:00 <zid> we had like a 20 year battle to stop our politicians handing over some autistic kid who ssh'd into a pentagon public machine
15:50:00 <zid> to the US
15:50:00 <heat> ok, but he broke their law
15:51:00 <heat> patents are only valid in the specific country they were issued in
15:51:00 <zid> Oh no, that isn't the law you will have broken when they extradite you
15:51:00 <zid> You'll be found in a hotel in a compromising position having woken up from being drugged
15:51:00 <heat> hahahahaha
15:52:00 <zid> There will be a respected doctor with a negative tox screen to hand too so you just sound crazy
15:53:00 <zid> (This is documented US foreign policy for the past forever)
16:10:00 <heat> https://www.freebsd.org/cgi/man.cgi?query=epoch&sektion=9&format=html this is new
16:10:00 <bslsk05> ​www.freebsd.org: epoch(9)
16:10:00 <heat> like RCU but slightly different
16:24:00 <kingoffrance> law was spirit, legal was form. you get what you pay for :)
16:24:00 <kingoffrance> *formal
16:24:00 <geist> i think you're pretty free to violate whatever patent there is in your code, but you might have trouble using it and/or getting it used later
16:24:00 <geist> so if thats not a concern, then dont worry about it
16:25:00 <kingoffrance> "dont cross the streams" :)
16:27:00 <kingoffrance> also mrvn stole what i thought about typing but did not
16:29:00 <geist> heat: hmm the epoch thing is interesting
16:29:00 <geist> would have to grok that to understand how it's used
16:29:00 <heat> yup
16:29:00 <heat> also trying to figure out https://svnweb.freebsd.org/base?view=revision&revision=357314
16:29:00 <bslsk05> ​svnweb.freebsd.org: [base] Revision 357314
16:30:00 <heat> this is a epoch-like thing that's coupled with the memory allocator
16:31:00 <heat> https://people.freebsd.org/~jeff/smr.pdf <-- unclear what the "ops" are but it's a graph and the lines look nice
16:31:00 <heat> :D
16:42:00 <heat> if_bridge got a 5x improvement from using epoch vs mutexes
16:42:00 <heat> this is big
16:43:00 <heat> mutex -> 3.9 million packets per second; rw locks -> 8 million packets per second; epoch -> 18.6 million packets per second
17:13:00 <mrvn> kingoffrance: doesn't "dont cross the streams" apply to NAT?
17:19:00 <mrvn> epochs sound fragile. "frees must Benjojo deferred until after a grace period has elapsed". So if the system is under heavy load and tasks don't run fast enough the free happens and they crash?
17:19:00 <mrvn> s/Benjojo/be/ stupid tab :)
17:27:00 <heat> that's how RCU works too
17:27:00 <heat> basically you defer frees until a certain point
17:28:00 <heat> i haven't read the epoch stuff very well but the biggest difference is probably the "when"
17:36:00 <mrvn> heat: it looks like it's for kernel threads and the kernel should never be overloaded like that. .oO(Until one day it is)
17:39:00 <kingoffrance> i dont know networking really, but i wager for joe average it was a financial decision
17:39:00 <kingoffrance> i.e. such questions did not enter thought process at all
17:39:00 <kingoffrance> and i imagine for isps as well
17:40:00 <kingoffrance> so ill rest my "you get what you pay for"
17:44:00 <kingoffrance> that implies a tipping point perhaps :)
17:49:00 <mrvn> In networking it's usually: Deal with this within X ms or you've got an error.
17:51:00 <geist> yah that's the part i've never really grokked with rcus and whatnot, i get that things are delayed but presumably there's a failsafe that makes sure that in the worst case they still are never freed early
17:51:00 <geist> but then in the past i had been told to explicitly not look at RCUs so i really dont know how they work
17:55:00 <No_File> "In networking it's usualy: Deal with this within X ms or you've got an error." Not so in the RS232 interface.
17:57:00 <mrvn> I think in an RCU like wikipedia describes each element would have a reference count. So basically you still lock every item you want to read but the lock can never block.
17:57:00 <geist> yah that's what i assume is at the bottom of it, a lazy ref count
17:58:00 <gamozo> Mornin!
17:58:00 <jimbzy> I am really starting to dig the 6502.
17:58:00 <mrvn> The trick is that you implement COW and update links lock-free. So no write locks.
18:00:00 <mrvn> "3. Wait for a grace period to elapse, so that all previous readers (which might still have pointers to the data structure removed in the prior step) will have completed their RCU read-side critical sections." or not ref counting. That's the part I never got with RCU.
18:01:00 <kingoffrance> jimbzy, i noticed 65c816 IIRC had some kind of "back compat" 6502 mode, i wonder if games used that to "port"
18:02:00 <jimbzy> Yeah, it starts up in 65C02 mode
18:03:00 <jimbzy> Wait I have that backwards I think.
18:03:00 <kingoffrance> that might be what the c is for :)
18:03:00 <kingoffrance> 8 or 16 bit
18:03:00 <jimbzy> Nah the C is for CMOS
18:04:00 <jimbzy> Nah it starts up in emulation mode from what I can tell.
18:12:00 <heat> they don't have reference counts in RCU
18:12:00 <heat> a read operation is, as I understand it, a simple atomic read
18:12:00 <mrvn> heat: so how do you know when you have waited long enough to free?
18:12:00 <heat> but before reading, it disables preemption (this stops RCU from freeing anything)
18:13:00 <heat> a write operation is a simple atomic swap with some extra sauce on it
18:13:00 <mrvn> heat: disabling preemption only means you don't get stopped, not that there is no concurrent access
18:14:00 <heat> ok?
18:14:00 <jimbzy> kingoffrance, I thought about going with the '816, but there was some weirdness in the addressing that I don't fully understand yet.
18:14:00 <heat> never said it did
18:14:00 <mrvn> all of the RCU operations are clear except the "how long to wait before free"
18:14:00 <heat> that's what makes RCU fast
18:14:00 <heat> or linux's RCU, that is
18:15:00 <heat> they have like 3 different variants now
18:15:00 <heat> all slightly different
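A toy single-writer sketch of the read/publish halves heat is describing (hypothetical names; the hard part of real RCU, deciding when the grace period has elapsed, is waved away here with a comment):

```c
#include <stdatomic.h>
#include <stdlib.h>

typedef struct { int value; } config;

static _Atomic(config *) current;

static void init_config(int v) {
    config *c = malloc(sizeof *c);
    c->value = v;
    atomic_store(&current, c);
}

/* Read side: one acquire load, no lock, no reference count. */
static int read_config(void) {
    return atomic_load_explicit(&current, memory_order_acquire)->value;
}

/* Write side: copy, modify, publish with an atomic swap, then defer
 * the free. In a real kernel the old pointer would sit on a queue
 * until all pre-existing readers have left their critical sections. */
static void update_config(int v) {
    config *fresh = malloc(sizeof *fresh);
    fresh->value = v;
    config *old = atomic_exchange_explicit(&current, fresh,
                                           memory_order_acq_rel);
    /* synchronize_rcu() would wait here for the grace period */
    free(old);
}
```

This is why disabling preemption works as a read-side "lock" in Linux: the grace-period machinery only declares readers finished once every CPU has been seen to schedule, which a preemption-disabled reader cannot do mid-read.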
19:11:00 <gamozo> Alright. Who here did IA-64 dev
19:14:00 <gamozo> I feel like I never quite understood all the hate it got. Mainly just compat with legacy x86?
19:14:00 <sonny> isn't that itanium?
19:14:00 <gamozo> yeah!
19:15:00 <sonny> I thought that was the new arch
19:15:00 <sonny> intel64 is the x86 compatible one
19:15:00 <mrvn> ia64 != amd64
19:15:00 <sonny> from some books I read, it seems that itanium is hard to write compilers for
19:15:00 <gamozo> That's actually what I was thinking
19:15:00 <sonny> s/some books/a book/
19:16:00 <gamozo> It looks like a _great_ architecture as you get really good control over hardware, but that offloads some of the difficulty to the compiler
19:16:00 <sonny> plus other economic factors
19:16:00 <gamozo> but The 2003 ecosystem for compilers was terrible. I'm legit curious if it'd be viable with modern LLVM/extensible compilers
19:16:00 <mrvn> it never was quite stable
19:16:00 <sonny> iirc, windows did a itanium port
19:16:00 <sonny> did linux ever get one?
19:16:00 <mrvn> And the architecture isn't really upgradable. All the dependencies are hardcoded.
19:17:00 <sonny> gamozo: they had gcc
19:17:00 <sonny> doesn't sound that bad
19:17:00 <No_File> whats so special about itanium?
19:17:00 <gamozo> sonny: Yeah, but _2003_ gcc. Honestly kinda all compilers I feel were pretty terrible until maybe like 2008-2010?
19:17:00 <gamozo> Largely just due to perf constraints of optimization passes/memory limits on data structures
19:17:00 <sonny> why? lol
19:18:00 <sonny> people have been writing fast compilers for years
19:18:00 <gamozo> Compilers just scale super well with memory and compute, and they've gotten probably 20-30x faster since then
19:18:00 <sonny> eh
19:18:00 <sonny> I don't know how you are making these assumptions
19:19:00 <gamozo> A mix of optimization pass dev and reading codegen from old code
19:19:00 <gamozo> You could tell the difference between a for loop and a while loop at the assembly-generated code level until the early 2000s
19:19:00 <gamozo> Compilers were _very_ literal
19:19:00 <sonny> I think herb stutter "free lunch is over" is from like '03
19:20:00 <sonny> there was lots of proprietary compilers back then I bet
19:20:00 <sonny> instead of defacto best open source stuff
19:20:00 <gamozo> I know VS98 codegen was pretty bad. I think Intel had their compiler out by then?
19:20:00 <gamozo> GCC I think was still an up-and-comer
19:20:00 <sonny> I dunno
19:21:00 <sonny> one of my profs still thinks GCC sucks lmao
19:21:00 <gamozo> Tbh, I think it's an absolute cluster as a codebase, but it has more consistent codegen than LLVM IMO
19:21:00 <gamozo> LLVM will do some really stupid things here and there. GCC often doesn't have the same weird pitfalls
19:21:00 <sonny> I don't recall who he said used to make good compilers
19:22:00 * sonny shrugs
19:22:00 <gamozo> Honestly, I don't think compilers really started to get good until LLVM started bringing some more academic ideas into compilers
19:22:00 <sonny> from the material I've read, that just seems silly
19:22:00 <gamozo> I think a lot of the proprietary ones were pretty legacy, outdated, and really just sold because people depended on them rather than being competitive [citation needed]
19:23:00 <sonny> optimizing compilers have been around
19:23:00 <gamozo> Hmm, what do you mean. I'm also relatively young, so I don't have the greatest first-hand experience other than reading a lot of assembly from old code/games/projects
19:23:00 <gamozo> Of course
19:23:00 <sonny> I'm 24, I don't know what happened back then
19:24:00 <gamozo> So. From my experience, until maybe, ~2005 (VC2005), and GCC around that era. Most things were taken super literally from the C -> asm. They were "optimizing compilers" but with such small boundaries on what they could reason with
19:25:00 <gamozo> I wonder if godbolt has old compilers...
19:25:00 <sonny> well, I can't argue with your experience, I have not looked into gcc or llvm much
19:25:00 <gamozo> 4.1.2 oo
19:25:00 <mrvn> gcc's core design is still always "read memory; op; write memory;" repeat.
19:26:00 <gamozo> Yeah. Memory is _really_ hard to optimize around as it's such a big blob. I feel like only recently have we gotten some good through-memory optimizations
19:28:00 <gamozo> I wonder if I can find some old benchmarks ahaha, would be a good trip down memory lane
19:29:00 <sonny> also considering the (complaints) discussions around ub in gcc, seems to be from that era as well, but the best way would be to test if it's on godbolt
19:30:00 <gamozo> Yeah. I wish godbolt went older than 4.1.2, I think that is actually right about when things started to get pretty good, gonna play around with it a bit!
19:30:00 <zid> It has sorta just scaled with how good desktops are
19:31:00 <zid> An average project you might compile on a desktop uses an average desktop amount of RAM to compile, and takes about as much time as you'd be willing to wait for it to do so :P
19:31:00 <gamozo> That's kinda what I've thought. Lots of O(n^2) things in compiler theory that kinda require you to make approximate shortcuts (eg. stop optimizing past a barrier) that now we can go deeper
19:31:00 <mrvn> lots of O(e^n) things i c++
19:31:00 <sonny> can't you just do multiple passes
19:32:00 <zid> no it has extreme local minima issues
19:32:00 * kingoffrance throws in https://mail.haskell.org/pipermail/haskell/2003-November/013104.html as a data point and disappears into the shadows
19:32:00 <bslsk05> ​mail.haskell.org: ANN: CMI 1.0.0 - a cross-module inliner for C - written in Haskell
19:32:00 <zid> (or local maxima, depending which way your y axis goes)
19:32:00 <gamozo> Oooh perfect
19:32:00 <gamozo> https://godbolt.org/z/chq8zc74T
19:33:00 <bslsk05> ​godbolt.org: Compiler Explorer
19:33:00 <mrvn> the recursion limit for templates went from 14 to 987 or so.
19:33:00 <gamozo> Here's an example of 4.1.2 vs 12.1 with a _very_ simple, constant proppable code
19:33:00 <zid> my favourite optimization I ever saw modern gcc make
19:33:00 <gamozo> 4.1.2 cannot constprop this (likely either it can't prop loops at all, or it can't prop to this depth, of 10)
19:33:00 <zid> was to turn a huge set of functions for generating some start-up test data
19:33:00 <sonny> wow that's kinda scary
19:33:00 <zid> into.. the exact test-data in an array
19:33:00 <zid> and 0 code
19:34:00 <mrvn> zid: that's what constinit is for
19:34:00 <mrvn> https://godbolt.org/z/a5h6TE9f7
19:34:00 <bslsk05> ​godbolt.org: Compiler Explorer
19:35:00 <sonny> gamozo: you get the same result on version 4.1.2 with -O3
19:35:00 <mrvn> with constexpr the compiler will give up at some point and init at runtime. with const I think even earlier.
19:35:00 <gamozo> Oooh you do! That's cool
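For reference, a stand-in for the kind of snippet being fed to godbolt here (the exact code behind the links isn't reproduced in the log, so this is an assumed shape): a loop over nothing but compile-time constants, which modern gcc/clang fold into a single constant return, and which per sonny even 4.1.2 handles at -O3.

```c
/* Every input is known at compile time, so constant propagation
 * plus loop evaluation can reduce the whole body to "return 55"
 * (i.e. a single mov) -- no loop survives in the generated code. */
static int sum_to_ten(void) {
    int sum = 0;
    for (int i = 1; i <= 10; i++)
        sum += i;
    return sum;
}
```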
19:37:00 <sonny> I guess you guys don't compile with optimizations?
19:37:00 <gamozo> I wish I could go back beyond 4.1.2 :D
19:37:00 <sonny> perhaps some linux or bsd distro has an old version
19:37:00 <sonny> in order to bootstrap
19:38:00 <Griwes> just fire up haiku and you'll have some version of 2.95
19:38:00 <gamozo> :D
19:38:00 <zid> https://godbolt.org/z/3rMbEKbWo It was something like that basically
19:38:00 <bslsk05> ​godbolt.org: Compiler Explorer
19:38:00 <zid> caught me by surprise
19:40:00 <zid> It can't even really just constant propagate that, it just sorta has to run it and see if it has any unavoidable side-effects it can precompute
19:41:00 <gamozo> Like, being able to optimize this seems to be GCC 4.6.4 (with -O3 this time!) https://godbolt.org/z/8q4csW1aT
19:41:00 <bslsk05> ​godbolt.org: Compiler Explorer
19:41:00 <gamozo> There was a lot of loop stuff really starting to happen around this era of mid-to-late 2010s
19:42:00 <zid> I think it just comes down to it being willing to do all its regular optimizations AFTER unrolling the loop
19:42:00 <zid> even if the loop seems at first glance to otherwise be too big to unroll
19:42:00 <zid> It's "searching" for optimizations, now
19:42:00 <gamozo> Yeah. I think the big thing that started to happen at this time was symbolic expressions being used to reduce complexity of code such that it can optimize ti better
19:43:00 <zid> 9/10 that optimization will be pointless and unrolling it to see if it can re-roll it tighter won't work
19:43:00 <gamozo> Yup
19:43:00 <zid> but that's what -O3 is for now we have 5GHz desktop cpus imo :P
19:44:00 <gamozo> I think a lot of modern compilation techniques require locally unrolling and simplifying. Eg. if you can collapse an inside loop, now you can unroll the outside loop, since you've simplified out a loop
19:44:00 <zid> I'd love to find some of the tunables in gcc and make a build with them set higher for fun
19:44:00 <gamozo> I know that's been going on forever, but these thresholds just keep going up and up (largely due to compute power, but some great algo design, and just better compiler code!)
19:44:00 <gamozo> I do that a lot with Rust (thus LLVM). I have some code which builds crazy nicely with custom unrolling limits
19:45:00 <gamozo> Intel's compiler had a `#pragma unroll` where you could help inform the compiler on an individual function/loop level
19:45:00 <sonny> I thought clockrate peaked 10 years ago
19:46:00 <sonny> isn't it more common to have a 'burst'
19:46:00 <gamozo> CPUs have still gotten way faster per-core, and we're getting better at using cores during building (which honestly is still pretty rare)
19:46:00 <sonny> 2Ghz seems common
19:46:00 <gamozo> Modern CPUs have no problem doing 4 x86 instructions/cycle
19:46:00 <zid> clockrate did peak 10 years ago for the most part, 2GHz is not common.
19:46:00 <zid> 2GHz is reserved for cheap netbooks
19:46:00 <gamozo> even if clock rates are about the same, many cores have gotten 4-15x faster single-threaded due to way faster caches, latencies, better paging structures, etc
19:46:00 <sonny> 2.5Ghz?
19:47:00 <zid> All modern M chips turbo over 4
19:47:00 <gamozo> I'd say 3.2-4.4 GHz probably average desktop now
19:47:00 <sonny> I am talking about the base
19:47:00 <zid> base clock is a lie
19:47:00 <sonny> o.O
19:47:00 <gamozo> Base clock kinda doesn't matter anymore
19:47:00 <sonny> lol
19:47:00 <zid> The turbo goes in increments of 100MHz
19:47:00 <zid> That's the bclk, base clock
19:47:00 <gamozo> Like, it does, but kinda doesn't. It's really all TDP based
19:47:00 <sonny> ok, time to learn about clock rate
19:47:00 <zid> It will pick a multiplier, which is what denotes your model, like mine is 12-45x and 12 is the 'base'
19:47:00 <gamozo> And then you have different clocks based on AVX, AVX-512, it's great
19:48:00 <zid> and 45 is the max turbo
19:48:00 <zid> and whatever thermals it can not cook itself under, it will clock to that
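Putting zid's numbers together, the arithmetic is just base clock times multiplier (the 100 MHz bclk is his figure; the 12-45x range is his example part):

```c
/* Core frequency from bclk and multiplier: a 12-45x part on a
 * 100 MHz bclk spans 1200-4500 MHz depending on load and thermals. */
static int core_freq_mhz(int bclk_mhz, int multiplier) {
    return bclk_mhz * multiplier;
}
```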
19:48:00 <sonny> different clockrates for the extensions?
19:48:00 <gamozo> yep!
19:48:00 <zid> avx-512 just used so much power the chips throttled
19:48:00 <sonny> how does that work??
19:48:00 <gamozo> Let me get you the good source for seeing it
19:49:00 <zid> If your 300W cpu ran using 300W 24/7 you'd be pissed
19:49:00 <sonny> yeah
19:49:00 <zid> they generally clock down a bunch and shut off parts of the die etc quite aggressively
19:49:00 <sonny> oh damn
19:49:00 <gamozo> You get different clock rates based on the number of cores active as well, dynamically
19:49:00 <gamozo> http://m.manuals.plus/m/e4ce6ae95961f6d890bf4eb042f46f2ceaf602cb3838678adf3e7d4772ea8b82.pdf
19:49:00 <zid> then there's a bunch of settings for how many microseconds it has to wait for the power delivery from the mobo to ramp back up to the 200 amps it will need at 300W etc
19:50:00 <gamozo> Go to like, page 19
19:50:00 <sonny> this is an amd64 thing or is it general?
19:50:00 <gamozo> There are different tables for AVX-512, AVX, and non-AVX
19:50:00 <gamozo> and for different numbers of active cores
19:50:00 <zid> which is why avx-512 was annoying, it used so much extra power your chip would just do.. nothing the first time it saw an avx-512 instruction for a few million cycles
19:50:00 <zid> becuse it was waiting for the voltage regulators to stabilise
19:51:00 <zid> cpus using 1V is nice until they wannt use 300W and you need to deliver 300A to them lol
19:51:00 <gamozo> CPUs are so damn picky
19:51:00 <GeDaMo> https://en.wikichip.org/wiki/intel/xeon_gold/5120#Frequencies
19:51:00 <bslsk05> ​en.wikichip.org: Xeon Gold 5120 - Intel - WikiChip
19:51:00 <mrvn> the example doesn't even need unrolling as such but constant propagation.
19:51:00 <zid> hence the.. thousand pins underneath, 90% are just fucking 1 amp power pins
19:52:00 <sonny> wow
19:52:00 <zid> There are lovely ARM cpus that use like 6 different voltages
19:52:00 <zid> makes the powersupply design on the board <3
19:52:00 <gamozo> We've been minmaxing power a lot more in the past few years, it's kinda fun
19:53:00 <gamozo> Oh, and on top of this all, these are all thermal-based clock rates
19:53:00 <gamozo> Like, my cores clock to X with all cores running + AVX-512 saturating, but I often get 100-200MHz faster
19:53:00 <gamozo> cause I have a cool server room
19:53:00 <zid> I need a new psu still, speaking off
19:54:00 <zid> I swapped to the sister cpu of mine with the better stepping and the 3.3V rail dips to 3V and the mobo turns it off
19:55:00 * sonny cries in 8th gen i3
19:55:00 <mrvn> Looks like only clang groks a slightly harder example: https://godbolt.org/z/fGY5EGY9a
19:55:00 <bslsk05> ​godbolt.org: Compiler Explorer
19:55:00 <sonny> iirc I saw something recently about gcc and the like being better at auto vectorization
19:56:00 <gamozo> mrvn: Ooh that's a cool example.
19:56:00 <mrvn> note how the older gcc increments %edx while modern gcc decrements %edx
19:57:00 <mrvn> Modern gcc knows the loop is run 10 times.
19:57:00 <gamozo> I love this stuff so much
19:57:00 <gamozo> I shouldn't because it's very distracting
19:58:00 <sonny> what are you working on?
19:58:00 <mrvn> gcc -O3 code looks horrible.
19:58:00 <gamozo> I've had so much fun doing compiler optimization stuff. It's actually something I think most people should get a chance to dabble with
19:58:00 <gamozo> I've had bad experiences with -O3
19:58:00 <gamozo> Huh, with -O3 4.1.2 and 12.1 have the same codegen
19:58:00 <gamozo> that's wild
19:59:00 <mrvn> I would rather have less compiler optimization. Give me code like I wrote it while being smart about it.
19:59:00 <sonny> that just means your code is ub ;)
19:59:00 <gamozo> Nah, -O3 sometimes hurts performance by unrolling too much which can lead to it preventing itself from doing more optimizations
19:59:00 <gamozo> Or just, really bad icache usage
19:59:00 <mrvn> gamozo: almost always
19:59:00 <sonny> ah
20:00:00 <gamozo> Idk how much of that is still accurate, as time goes on I would imagine it matters less and less, but I know historically a lot of people would advise against -O3
20:00:00 <gamozo> That being said, I just write Rust then my compiler can really do some fancy optos! <3 strict aliasing
20:00:00 <mrvn> gamozo: change the code to "ii < 100" and look at the clang output.
20:01:00 <gamozo> Oooh that's beautiful
20:01:00 <gamozo> Keeps it nice and tight, does minimal operations
20:01:00 <mrvn> -O3 used to be unstable, now it's usually just slower.
20:01:00 <gamozo> I don't really see how to make that code faster on the CPU honestly
20:01:00 <gamozo> There's a chance unrolling to 6 would be better though? Not sure
20:02:00 <mrvn> with ii < 100 clang unrolls and then rerolls the loop or something.
20:02:00 <mrvn> gamozo: 100 / 6?
20:02:00 <gamozo> For absolute maximum perf I think doing loops by 6 and then flushing the remainder would be fastest
20:02:00 <mrvn> 6 times would require a jump into the middle of the shifts at the start.
20:03:00 <gamozo> just because I think the 6th is free, idk, it's like a super micro optimization
20:03:00 <mrvn> why 6?
20:03:00 <gamozo> My theory is that it should be able to execute 2 of those shls per cycle, and the branch forces a sync to a cycle boundary. So this is doing 2.5 cycles, and then doing nothing for 0.5, then branching
20:03:00 <gamozo> _I think_ the fact that they hit memory might make that not matter
20:04:00 <gamozo> for register based math that would be true in this case
20:04:00 <gamozo> Would it actually be measurable? Probably not
20:04:00 <mrvn> the register part got eliminated completely because 4 << 100 == 0
20:05:00 <mrvn> gamozo: modern CPU have cycle counters. you can measure very accurately.
20:05:00 <gamozo> I'm familiar, it's just that at this level such a minute amount of system noise (even cache coherency traffic) can overpower measuring something this miniscule
20:05:00 <gamozo> Hmm, I have an example of this benchmark somewhere I think
20:05:00 <gamozo> it's kinda neat
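mrvn's point about cycle counters can be sketched with the x86 TSC (this assumes an x86-64 build with GCC/Clang; `__rdtsc` comes from `<x86intrin.h>`):

```c
#include <stdio.h>
#include <x86intrin.h>

int main(void) {
    /* Minimal cycle-count measurement via the time-stamp counter.
     * In practice you would pin the thread, warm the caches, and take
     * the minimum over many runs to fight the system noise gamozo
     * mentions (cache coherency traffic, interrupts, etc.). */
    unsigned long long start = __rdtsc();
    volatile unsigned long long x = 4;   /* volatile: keep the work alive */
    for (int ii = 0; ii < 100; ii++)
        x <<= 1;
    unsigned long long end = __rdtsc();
    printf("%s\n", end > start ? "ok" : "tsc went backwards?");
    return 0;
}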
20:13:00 <mrvn> gamozo: I love the paper that shows that what shell variables you have in your ENV makes more of a change in speed than -O2 vs -O3.
20:16:00 <GeDaMo> https://www.youtube.com/watch?v=r-TLSBdHe1A
20:16:00 <bslsk05> ​'"Performance Matters" by Emery Berger' by Strange Loop Conference (00:42:15)
20:16:00 <gamozo> That is a masterpiece and one of the best talks I've ever seen
20:17:00 <gamozo> Legit it should be required watching. It's so accurate to optimization and the worldview my brain has and what has worked for me
20:17:00 <gamozo> it's one of the very few talks that really seems to hit the nail on the head
20:17:00 <gamozo> I do a lot of research in my RTOS for x86 for this exact reason, to eliminate noise
20:17:00 <gamozo> The points in that talk are arguably why I do osdev lol
20:18:00 <sonny> what gives the R the realtime in RTOS?
20:19:00 <gamozo> I think at the highest level, the biggest difference is whether or not pre-emption is in it. Eg. does your task ever get interrupted
20:20:00 <gamozo> Knowing that if you don't yield to the kernel, your process has full control of CPU resources for as long as you want, seconds, minutes, whatever
20:22:00 <sonny> I see
20:22:00 <sonny> polling IO ftw
20:22:00 <gamozo> Yeah, I do IO polling in my OS and I like it a lot
20:23:00 <mrvn> gamozo: pre-emption != pre-emption. Basically every kernel does pre-emption.
20:23:00 <mrvn> gamozo: the thing that makes a difference is prioritization
20:23:00 <sonny> I wonder if for a personal computer it's better to have it distributed so low latency stuff gets polling IO but like sound and stuff is done elsewhere
20:24:00 <gamozo> mrvn: Of course!
20:24:00 <mrvn> sonny: sound is very susceptible to jitter. You really don't want delays there
20:24:00 <mrvn> video is even worse
20:24:00 <sonny> yeah, I don't mean polling IO for the audio
20:25:00 <mrvn> You want 2 things: race to sleep, and fast wakeup when it's time
20:25:00 <sonny> in the video, "we'll just upgrade next year" lol
20:27:00 <geist> re: ia64 it was neat, and i remember at the time everyone was drinking the koolaid saying it was the future
20:28:00 <geist> many other architectures died as a result of companies switching to it
20:28:00 <zid> ia64 is HP's future, not yours :p
20:28:00 <gamozo> ahahah
20:28:00 <geist> alas the first few implementations had performance issues with regular code
20:28:00 <geist> and it never caught on
20:28:00 <zid> it actually... ran regular code eventually?
20:28:00 <geist> oh i ran regular code, just didn't really kick ass like it was supposed to
20:28:00 <geist> first few implementations
20:29:00 <gamozo> Shame. It was really power hungry too right?
20:29:00 <zid> maybe this is just hindsight, but how the fuck did they think it'd kick ass at regular code
20:29:00 <geist> intel fully expected for x86 to go away and all desktops/laptops/etc would also switch to ia64
20:29:00 <geist> lots of koolaid
20:29:00 <geist> and also to be fair the first few impls just had design issues
20:29:00 <geist> huge cache latency, etc
20:30:00 <sonny> so, everyone just stayed on x86?
20:30:00 <geist> also had pretty terrible code density but there were some assumptions as to how processor tech would be developed
20:30:00 <geist> basically. AMD came out with x86-64 shortly afterwards (2002/2003) and then eventually that caught on
20:31:00 <gamozo> Yeah, that was honestly wild. Intel was pre-occupied and AMD made the 64-bit standard for Intel's arch
20:31:00 <gamozo> honestly pretty wild
20:31:00 <geist> but really x86-64 wasn't an instant success, took quite a few years for it to really take off
20:32:00 <geist> mostly intel had to make their version, and really windows took a few years to get ported, and that made it legitimate
20:32:00 <gamozo> https://www.youtube.com/watch?v=hxSqCdT7xPY
20:32:00 <bslsk05> ​'Intel vs. AMD - What happens when you remove the fan?' by JM Lainez (00:02:18)
20:32:00 <gamozo> I remember this Intel vs AMD video from wayyyy back when
20:32:00 <gamozo> ahahah
20:32:00 <sonny> oh I forgot amd64 happened first
20:32:00 <gamozo> the chiptunes are a banger
20:34:00 <sonny> damn CPUs get hot
20:36:00 <geist> yah so the problem with *that* (the amd things overheating) was at the time i believe intel had the patent on thermal based shutdown
20:36:00 <geist> and wouldn't license it to anyone else. so it was well known that AMD cpus would just cook themselves
20:36:00 <geist> the patent eventually expired, and/or they licensed it
20:36:00 <geist> i think other vendors had the same problem (VIA, etc)
20:37:00 <mrvn> And they made these nice specs where the bios/os sets the fan speed dynamically depending on temp and the hardware is not allowed to override it when the cpu gets too hot.
20:37:00 <Griwes> another example of how patents actively stifle innovation
20:37:00 <mrvn> if the CPU doesn't cook itself you are violating the specs
20:37:00 <geist> i remember at some point in the late 2000s my cpu fan fell off while running
20:38:00 <geist> like the clip broke and the fan pivoted off the cpu and fell to the side
20:38:00 <geist> system shut down within 10 seconds or so. it was a K10 i believe, so I was really worried it had killed itself
20:38:00 <geist> but by then i guess AMD had their thermal runaway thing implemented
20:41:00 <mrvn> Here is a nice challenge for you all: Write a program that when run with "env -i SPEED=LOW ./prog" it is slower than "env -i SPEED=FAST ./prog" and doesn't getenv("SPEED")
20:46:00 <moon-child> mrvn: stack alignment!
20:47:00 <mrvn> moon-child: obviously. now exploit it
20:47:00 <moon-child> meh i don't feel like it
20:48:00 <heat_> geist, how does thermal based shutdown get patented lol
20:49:00 <heat_> my car is on fire, i guess it must keep running
20:49:00 <mrvn> heat: patents for cars don't apply for cpus
20:49:00 <heat> same principle
20:49:00 <heat> it's a basic safety measure
20:49:00 <heat> do we patent safety?
20:50:00 <mrvn> what has that got to do with anything? We are talking patents.
20:50:00 <heat> do we patent safety?
20:50:00 <heat> how the fuck is this patenteable?
20:51:00 <mrvn> You can fight the patent claiming it's "basic". That's a valid strategy. Do you have a million bucks?
20:51:00 <heat> it shouldn't be a patent in the first place yeah?
20:51:00 <mrvn> they pay, they get a patent. That's basically how it works if a patent lawyer writes it.
20:52:00 <mrvn> If it's not patentable then you didn't obfuscate it enough.
20:57:00 <gamozo> Ahhh, patents
20:57:00 <kingoffrance> heat: you are assuming principles are involved rather than pure formalism "the judge will do this"
20:58:00 <kingoffrance> which is 180 of "law"
20:58:00 <kingoffrance> no legitimate court proclaims themself "god" it is people operating under bogus pretenses
20:58:00 <kingoffrance> i.e. short-term thinking, to long-term destruction
20:59:00 * kingoffrance crawls back to shadows
20:59:00 * kingoffrance whispers premature optimization root of all evil
21:09:00 <geist> yep. what mrvn said
21:09:00 <geist> doesn't mean it makes sense, but the idea is to narrowly scope the patent to actually get it, but broadly scope it such that its applicable to the largest amount of stuff
21:10:00 <geist> and that's what they pay the lawyers for
21:11:00 <mrvn> and you pay the big bucks to have the patent look like it has very narrow scope but can actually defend a large scope if desired.
21:13:00 <mrvn> anyone used coz?
21:13:00 <mrvn> coz-profiler
21:21:00 <moon-child> afaik it's unmaintained
21:21:00 <moon-child> oh hmm, last commit a couple of months ago
21:22:00 <moon-child> oh no I was thinking of stabilizer
21:25:00 <mrvn> you kind of want both
21:29:00 <moon-child> I want a pony too
21:30:00 <mrvn> there is a whole OS full with them
21:33:00 * moon-child smacks klange around a bit with a large trout
21:34:00 * klange kicks moon-child from teh channel
21:35:00 <mrvn> https://www.youtube.com/watch?v=T8XeDvKqI4E
21:35:00 <bslsk05> ​'Monty Python - The Fish Slapping Dance' by ArmyTanksStudios (00:00:23)
21:35:00 <klange> Two can play classic IRC games. And one can arguably play them better when one is ops.
22:21:00 <geist> random things i found while refreshing my cache of ARM docs on arms site:
22:21:00 <geist> https://developer.arm.com/documentation/100940/0101/?lang=en is a pdf that seems to try to summarize armv8 address translation, may be a handy read for folks trying to learn
22:21:00 <bslsk05> ​developer.arm.com: Documentation – Arm Developer
22:22:00 <geist> https://developer.arm.com/documentation/100941/0101?lang=en is a pretty good overview of some of the memory model stuff and barriers and whatnot
22:22:00 <bslsk05> ​developer.arm.com: Documentation – Arm Developer
22:24:00 <geist> a few other guides there you can find under the 'guide' tag. kinda helpful maybe
22:25:00 <gamozo> nifty
23:22:00 <heat_> someone on r/osdev got their os to run on apple devices
23:25:00 <zid> woo lightning
23:25:00 <zid> not had a storm in a year
23:26:00 <klange> With an unlocked bootloader, the basics of a recent iPad are bog standard ARM.
23:26:00 <heat> oh really?
23:26:00 <klange> And the boot environment hands you a framebuffer, so showing something on screen and having a working timer are par for the course on those.
23:26:00 <heat> device tree and everything?
23:27:00 <klange> Not directly from the device, but they're using checkra1n's pongo as a bootstrap.
23:27:00 <heat> oh
23:28:00 <heat> that's substantially less impressive
23:28:00 <heat> still cool I guess
23:28:00 <klange> It's cute, though. It's like how I bought on a Surface :)
23:28:00 <klange> Sure, it technically works, but then you have a useless tablet with a clock ticking away.
23:29:00 <klange> s/bought/boot/
23:29:00 <klange> bleh, gotta wake up, I have to run a meeting in a half hour...
23:30:00 <klange> I could probably boot ToaruOS with their setup... my old iPad Mini 2 is probably a viable setup, and it's ancient and disused enough I don't care about wrecking it.
23:31:00 <heat> btw something horrific I saw: I don't think they're using the device tree
23:31:00 <klange> In fact, it's probably easier to get a useless 'hello world' boot on an iPad than it is on an M1 right now, just because of how esoteric the M1 boot environment remains, and how geared marcan's tooling is towards booting Linux?
23:31:00 <heat> they invented their own device tree
23:31:00 <heat> like, custom format in json
23:32:00 <klange> I like their dedication, though. They've got a target device profile in mind, and they're pushing forward with it.
23:33:00 <heat> huh i was expecting for M1 to use UEFI
23:33:00 <klange> Nothing all that special about opuntia, but gosh darn it reminds me of the early days of ToaruOS. Or even the current days of ToaruOS.
23:33:00 <heat> klange, where's all the python? :P
23:33:00 <klange> I said early!
23:33:00 <klange> ToaruOS was Python in the middle of its life.
23:34:00 <heat> you liked python so much you wrote your own
23:34:00 <heat> that's true dedication
23:34:00 <klange> Even 1.0 was virtually all C, it wasn't until ~1.2 that the whole Python DE and suite of fun applications existed.
23:34:00 <klange> Which is part of why the NIH project worked at all - I just reverted to the original C apps.
23:35:00 <klange> And then improved them with the functionality I had prototyped in the Python versions.
23:35:00 <heat> now you bring them back and run them on kuroko
23:35:00 <klange> I really should. Kuroko is super viable, it's got over a year of dedicated use in my editor, and it's doing nicely in benchmarks.
23:36:00 <klange> Threads are still garbage - Kuroko suffers from all the things CPython's GIL was meant to address, combined with the fact that my pthread lock implementations on Toaru are garbage.
23:36:00 <klange> garbage garbage garbage
23:36:00 <klange> But none of those original Python apps were threaded - I didn't even build Python with thread support.
23:36:00 <heat> btw you should fix bim for serial ttys
23:36:00 <klange> Which is also why I haven't ported Python again to 2.0.
23:36:00 <klange> what's wrong with bim on serial ttys
23:37:00 <klange> are you setting appropriate quirks for your terminal? it's all manual config, none of that terminfo stuff
23:37:00 <heat> it didn't work right because they have 0 cols and 0 rows
23:37:00 <klange> oh you need to tell stty about the size of your terminal
23:37:00 <moon-child> what are the threading issues?
23:37:00 <heat> we went through this when I ported bim
23:37:00 <klange> I have a tool for that if your terminal supports cursor reporting
23:37:00 <klange> ToaruOS actually runs this on serial consoles on startup
23:37:00 <heat> you should like just set a failsafe size for the tty
23:38:00 <heat> like ncurses apps do
23:38:00 <klange> https://github.com/klange/toaruos/blob/master/apps/ttysize.c used to be called divine-size
23:38:00 <bslsk05> ​github.com: toaruos/ttysize.c at master · klange/toaruos · GitHub
23:38:00 <heat> (or maybe they find it out with terminal sequences, I can't tell. I think some apps like irssi correctly run)
23:39:00 <klange> as a convenience it will also just take width/height arguments if your terminal _doesn't_ support the cursor reporting callbacks
23:39:00 <klange> so you can just `ttysize 80 24` or whatever
23:42:00 <klange> (it's also useful since a serial console attached to a terminal emulator in a window has no way of reporting a change in size, so i just mash `ttysize` a bunch when reorganizing windows)
23:42:00 <klange> https://klange.dev/s/Screenshot%20from%202022-05-19%2008-41-04.png
23:42:00 <klange> my rpi has been sitting here running just fine for _nearly two months_
23:43:00 <klange> Admittedly, it hasn't been doing anything besides running an idle serial console, but I'm kinda shocked - and the clock isn't even off.
23:49:00 <heat> now make it do something
23:49:00 <heat> httpd or ircd
23:50:00 <klange> It doesn't have a NIC driver for the genet device.
23:50:00 <klange> Plus the whole 'ToaruOS still lacks listening TCP sockets because I have no idea where my priorities are and there is way too much on my plate'.
23:50:00 <heat> oh
23:51:00 <klange> At least this network stack isn't fundamentally broken in a way that prevents implementing them.
23:51:00 <klange> toaru32's was. I wrote a whole new stack for Misaka.
23:51:00 <heat> i added listening tcp socket support a few weeks back and tried to have sshd
23:52:00 <heat> totally forgot I didn't have ptys
23:54:00 <heat> i should like finish some of my pending work
23:54:00 <heat> i didn't actually fully get riscv support - didn't add IRQ support, also no thread spawning and signals
23:54:00 <klange> I think XHCI/USB is my top priority, that's what I was working on before my vacation.
23:54:00 <heat> and i have a handful of git stashes
23:55:00 <klange> And it's what this RPi was testing when it was last booted two months ago, which is why it has a serial console and no compositor.
23:55:00 <klange> It initializes the controller and can query ports, and I believe I successfully sent commands and received responses.
23:55:00 <heat> i don't grok USB yet
23:56:00 <heat> like, how tf do USB addresses work, how do you enumerate them, etc
23:56:00 <heat> funnily enough I also have a "pending" ehci driver in the tree lol
23:58:00 <klange> You ask the controller and it does the thing, apparently.
23:58:00 <klange> idk I only got that far
23:59:00 <klange> The devices I need to talk to on the rpi are behind a hub, so next step is talking to that.