Search logs:

channel logs for 2004 - 2010 are archived at ·· and can't be searched

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present

Tuesday, 10 January 2023

00:00:00 <mjg> whoa there
00:00:00 <mjg> the only REAL system is templeos
00:00:00 <sortie> heat. and they all have a wonderful supported upgrade path to my big and professional Sortix operating system
00:00:00 <mjg> everything else is but a footnote
00:00:00 <mjg> sortie: if you wrote code to overwrite openbsd with sortix, i would get a vm just to do it
00:00:00 <klange> i prefer oses with living development teams
00:01:00 <klange> (too soon?)
00:01:00 <sortie> mjg, I did, just use instead of dd(1)
00:01:00 <bslsk05> ​ Sortix rw
00:01:00 <mjg> sortie: cmon dawg
00:01:00 <mjg> sortie: while running openbsd
00:01:00 <mjg> have to be live overwrite and only reboot needed
00:03:00 <sortie> mjg, hmm? Just return to single user mode or something, mount / read-only, then probably it will survive rewriting the harddisk with a Sortix image
00:03:00 <mjg> no man
00:03:00 <mjg> you just don't get it
00:03:00 * mjg is staying on mirbsd then
00:03:00 <sortie> You mean replacing OpenBSD in place with Sortix, keeping all your filesystem files?
00:04:00 <mjg> ya man you migrate the system
00:04:00 <mjg> seamlessly
00:04:00 <mjg> i grant reboot to swap the kernel
00:05:00 <klange> don't need to reboot, just load a kernel module that rewrites everything
00:05:00 <sortie> OK cus I'm sure Theo would not like me hot hackin' the kernel to become Sortix
00:05:00 <mjg> klange: giving him an easy way out
00:05:00 <sortie> I have a feeling OpenBSD doesn't have dynamic modules any more
00:05:00 <mjg> sortie: it does not
00:05:00 <heat> openbsd doesn't even have tmpfs
00:05:00 <mjg> deemed insecure
00:05:00 <mjg> heat: it does, but they don't build it
00:06:00 <mjg> by default
00:06:00 <sortie> mjg, heretic yes indeed
00:06:00 <sortie> mjg, the biggest problem with that is really the filesystem type
00:06:00 <klange> that's disturbing, how does openbsd deal with... plugging things in?
00:06:00 <sortie> Sortix doesn't have ufs support, just ext2
00:06:00 <heat> buttpl
00:06:00 <klange> "guess you'll have to recompile and reboot"?
00:07:00 <sortie> Extracting a Sortix base system onto / is simple enough
00:07:00 <mjg> klange: you have all the drivers loaded
00:07:00 <klange> Though I would guess the things I have in mind don't have _any_ openbsd drivers anyway, so it probably doesn't matter
00:07:00 <sortie> Making that filesystem migrate to ext2... yeah... no... -- but implementing a ufs driver might be doable
00:07:00 <mjg> klange: you do realize it's not many :)
00:07:00 <heat> LMAO
00:07:00 <sortie> The bootloading would be tricky but maybe not so much. GRUB can probably be installed from OpenBSD
00:07:00 <klange> :headtap: don't need loadable modules if you don'
00:08:00 <klange> t have any drivers in the first polace
00:08:00 <klange> kewagjekwahgekwa
00:08:00 <heat> sortie, ah yes, a ufs driver for THE ufs
00:09:00 <sortie> The easiest is probably just to tar up the root filesystem, replace the harddisk with a minimal installation disk image, which then does mkfs and extracts the saved files
00:09:00 <mjg> wait are you the weird one which has pledge
00:09:00 <mjg> or was it haiku
00:10:00 <sortie> mjg, am I allowed to send all your personal files to the central Sortix migration and surveillance server?
00:10:00 <sortie> I don't have pledge at this time. Was considering implementing it for fun
00:10:00 <heat> mjg, I think you're thinking about unveil and serenityos
00:10:00 <mjg> well of course
00:11:00 <mjg> heat: technically haiku has some pledge tho
00:11:00 <mjg> data/system/data/fortunes/Art:His performance is so wooden you want to spray him with Liquid Pledge.
00:11:00 <mjg> data/system/data/fortunes/Goedel:THIS IS PLEDGE WEEK FOR THE FORTUNE PROGRAM
00:11:00 <mjg> src/apps/mail/words:pledge/DS
00:11:00 <mjg> but otherwise fair point!
00:11:00 <sortie> niht
00:11:00 <heat> sortie, niht
00:12:00 <mjg> not invented here, theo
00:12:00 <heat> game of trees moment
00:12:00 <mjg> dude i tried using it few days ago
00:12:00 <klange> I should implement a pledge-alike. It's such a simple idea.
00:12:00 <mjg> kept complaining about a missing file in the repo
00:12:00 <mjg> i mean under .git
00:13:00 <mjg> git (not got!) fsck said 's all good man and had no issues
00:13:00 <mjg> in true openbsd fashion their got log does not roll with a pager
00:13:00 <mjg> :p
00:13:00 <heat> lol
00:14:00 <heat> klange, the problem with kernel-side pledge is that you couple yourself to a libc
00:14:00 <heat> which sux
00:15:00 <heat> i actually wanted to see if I could do libc-side pledge by dynamically creating bpf filters for seccomp
00:16:00 <klange> Well, I'm already tied to a libc, but also if your pledges are based on groups of syscalls, I would think that would be a lot less of a problem?
00:17:00 <heat> depends
00:17:00 <mjg> have you seen the realities of pledge in the kernel
00:17:00 <mjg> abhorrent
00:17:00 <heat> if your pledge("stdio exit") bans everything except write/read/writev/readv/.../exit, this works... except if you use another syscall in stdio
00:18:00 <heat> which is very possible as you may need locks and/or memory allocation
00:19:00 <heat> so now you're already creating exceptions to strict syscall<->libc groups to make sure your libc works
00:19:00 <klange> Perhaps the pledge interface could be in the kernel, but the thing that maps group names to syscall masks could be handled by the libc
00:19:00 <klange> So the libc could know that if you ask for stdio it needs lock syscalls.
00:19:00 <heat> yes
00:20:00 <heat> so you're basically getting into pledge in the libc with some sort of syscall filter
00:20:00 <heat> which was my idea, seccomp as the backend of pledge
00:21:00 <heat> seccomp for its powerful and fast nature, pledge for usability
00:21:00 * klange . o O (Static checker for pledge declarations that checks that code paths after it don't lead to forbidden system calls)
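A minimal userspace sketch of heat's seccomp-backed pledge idea, assuming Linux on x86-64. The helper name `pledge_stdio()` and the exact syscall list are invented for illustration, not any libc's real API; filtered syscalls return ENOSYS rather than killing the process, matching the point that the libc must be able to cope:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Jump straight to the final ALLOW statement when the syscall matches. */
#define ALLOW_NR(nr, jt) BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), (jt), 0)

static int pledge_stdio(void)
{
    struct sock_filter filter[] = {
        /* 0-2: check the architecture first, so the x86-64 syscall
         * numbers below can't be confused with x32/i386 ones */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, arch)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        /* 3: load the syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* 4-15: the "stdio" promise: I/O syscalls, plus what malloc and
         * locks quietly need (brk/mmap/futex), plus clean exit */
        ALLOW_NR(__NR_read, 12),
        ALLOW_NR(__NR_write, 11),
        ALLOW_NR(__NR_readv, 10),
        ALLOW_NR(__NR_writev, 9),
        ALLOW_NR(__NR_fstat, 8),
        ALLOW_NR(__NR_brk, 7),
        ALLOW_NR(__NR_mmap, 6),
        ALLOW_NR(__NR_munmap, 5),
        ALLOW_NR(__NR_futex, 4),
        ALLOW_NR(__NR_rt_sigreturn, 3),
        ALLOW_NR(__NR_exit, 2),
        ALLOW_NR(__NR_exit_group, 1),
        /* 16: everything else fails with ENOSYS instead of killing,
         * so the libc has a chance to detect and cope */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | (ENOSYS & SECCOMP_RET_DATA)),
        /* 17: */
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    /* required so an unprivileged process may install a filter */
    if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
        return -1;
    return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}
```

Note how the "stdio" mask already has to smuggle in brk/mmap/futex for malloc and locking, which is exactly the libc-coupling problem heat describes.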
00:27:00 <vai> good morning everyone from Finland EU
00:27:00 <heat> what are you on about
00:28:00 <heat> it's night in finland
00:29:00 <mjg> ahead of his time
00:29:00 <klange> 2:30am, early bird gets the osdev worm
00:30:00 <mjg> allright mofoz
00:30:00 <mjg> my magic rust patch compiled
00:30:00 <mjg> and i verified the compiler did not lose its shit
00:37:00 <heat> rust committer confirmed?
00:44:00 <mjg> an english native speakers willing to lint this
00:44:00 <mjg> any
00:45:00 <heat> / do it more times later. <- do it again?
00:46:00 <mjg> no offence but what do native speakers think
00:46:00 <mjg> not 1st gen internet english speakers from portugal1111
00:47:00 <heat> fuk u
00:47:00 <mjg> wat
00:47:00 <heat> Error other than `ENOSYS` is not a good <-- errors other than ... are not a good
00:48:00 <mjg> probably fair but i stand by my previous remark
00:49:00 <mjg> basically not giving you the satisfaction
01:15:00 <klange> i need to get rust module builds working for toaru so i can work with lina to make a usb stack we can share with m1n1
01:16:00 <FireFly> nice
06:53:00 <geist> omg cyriak put out a new video
07:23:00 <zid> MV for a song, that actually is a really good idea
07:23:00 <zid> that's what all the japanese youtube animators do, MVs
08:15:00 <nur> geist, who is thay
08:18:00 <geist> oh cyriak has been making these completely insane nightmare fuel videos for like 16 years
08:18:00 <geist> hasn't put one out in a while
08:19:00 <geist>
08:19:00 <bslsk05> ​'cyriak - Home' - 'I'm a freelance animator based somewhere in the UK. Feel free to get in touch for professional commissions (or anything else) via twitter or my website. I've given up trying to figure out youtube's messaging system.
08:19:00 <geist> but just did a new one a few days ago
08:22:00 <nur> oh I thought it was a systems youtuber I hadn't heard of
08:23:00 <nur> anyway I think I have a lot more time for osdev again
08:23:00 <nur> that's the good news
08:23:00 <nur> the bad news is I got laid off due to "global recession"
08:23:00 <nur> :-|
09:47:00 <epony> cheers
09:48:00 <epony> orange juice for everyone and vodka for nur he's paying
09:48:00 <epony> today only
09:52:00 <pog> neat
11:25:00 <nur> :p
11:35:00 <kaichiuchi> hi
11:36:00 <kaichiuchi> this is how I picture the linux community:
11:36:00 <bslsk05> ​'Sir Nils Olav promoted to Brigadier by Norwegian King's Guard' by Edinburgh Zoo (00:03:31)
11:40:00 <epony> it's more like the "clientele" of a nut-house (mental "health" facility) with male nurses and totally cuckoo doctors who are former patients
11:41:00 <epony> and everyone has a strange dialect that only they understand themselves, but try to work out the TV watching rules in the main hall after the TV exploded
11:47:00 <epony> the nurses are taking the same medication as the patients, which is big pharma researching fake opiates for the masses and using the nurses as control groups; the doctors have advanced experience but can't really provide any value to the pharmacoms, so the nurses are the drop-in assistants that do the majority of the experiments
11:48:00 <epony> and the patients are the walk ins who just like it and do their time inside since it's more appealing to their natural predisposition for communal shared experiences of the self and group fun and games
11:49:00 <Ermine> kaichiuchi: it's funny, but I don't see how it correlates with the linux community
11:49:00 <kaichiuchi> i have no rational basis for it
11:49:00 <kaichiuchi> all I know is when I see it, I immediately think linux community
11:50:00 <Ermine> Because penguin?
11:50:00 <kaichiuchi> not only because penguin
11:50:00 <kaichiuchi> but.. i don't know
11:52:00 <epony> have you seen the sterilised gorillas?
11:53:00 <epony> they look pretty sad and pathetic
11:53:00 <epony> like apple and google employees
12:37:00 <kof123> i'm ok with such characterizations, but surely ibm/ms/whoever is 300-pound gorilla
12:38:00 <kof123> this has long precedent, from before other things existed
12:38:00 <kof123> no wonder they are in love lol
12:45:00 <epony> havinh gun is the best part
12:45:00 <epony> having fun
12:46:00 <epony> wrong key opened the wrong door weird unreal level found by accident
12:47:00 * epony goes warp drive
13:08:00 <fedorafan> king of fighters
13:08:00 <fedorafan> where is the wodka
13:08:00 <fedorafan> who got it
13:08:00 <fedorafan> gimme :D
13:12:00 <kof123> hi fedorafan. what are you working on/plans? such talk doesn't bother me, i am not a mod of any shape or form, but IMO focus on coding and nevermind the bollocks :D
13:13:00 <fedorafan> i see, just joined this chan today and saw talk like that, im fine with that
13:13:00 <fedorafan> oh your nick doesnt refer to the game
13:14:00 <fedorafan> cool arcade
13:14:00 <kof123> i get that all the time, np. i do c89 and filesystem at the moment
13:14:00 <fedorafan> what is c89
13:14:00 <fedorafan> i dont do os development
13:14:00 <kof123> ancient c standard 1989 lol
13:14:00 <fedorafan> oh ok
13:24:00 <fedorafan> lol
14:38:00 <netbsduser> how do people tend to mix fine-grained locking of virtual memory control structures with the difficult reality that one needs to sometimes do "bottom-up" locking and sometimes "top-down" ordering?
14:39:00 <sham1> In an ad-hoc manner I'd think
14:39:00 <netbsduser> take the case of a page fault: you must lock the address space map, then the vm object, then perhaps an individual page description; while for e.g. "remap a page read-only everywhere because it is now to be CoW'd"; or page replacement "swap out a page/put a mapped file page back to disk" needs to be 'bottom-up'
14:40:00 <sham1> What do you mean by bottom-up and top-to-bottom here
14:41:00 <netbsduser> sham1: the difference between "lock address space map -> lock vm object -> lock page struct" and "lock page struct -> lock its VM object -> lock all address spaces in which that VM object is mapped"
14:42:00 <netbsduser> the former ordering is what happens in a page fault; the latter is what happens when e.g. it's time to swap out a page, or to write it back to backing store (for file mmaps) and release it
14:43:00 <sham1> Why would you need to lock the address spaces for the latter?
14:45:00 <netbsduser> sham1: because the page is being released, it must be unmapped everywhere
15:19:00 <heat> netbsduser, my vm objects have multiple locks
15:21:00 <heat> actually I found a lock-order inversion in my wp code lol
15:23:00 <heat> but for instance if you want to wp a page in your vmo, you have a mapping_list_lock and a general lock
15:24:00 <heat> grab the mapping list lock, iterate through regions that map that, get its address_space, lock the page tables and wp
15:25:00 <heat> you don't need to grab a whole "mmap_sem" for that
15:25:00 <heat> this can actually be done all inside spinlocks, no need for sleepable locks even
15:26:00 <heat> i think that the gist is that if you find yourself needing to invert lock orders, your code is wrong or the locks are too big
15:27:00 <netbsduser> i fear the locks are too small, a big lock would simplify this considerably
15:28:00 <heat> what's your exact problem, in this case?
15:31:00 <heat> its possible that you're trying to do too much under the lock
15:31:00 <netbsduser> page-out. pages are organised into queues. to which queue a page belongs is protected by the page_queues_lock. when it is time to page out some pages, the page has to be locked. then every mapping of the page has to be iterated over and unmapped. the page's owning object needs to be updated to indicate that the page has been swapped out (or put back to disk in the case of a file mapped page)
15:33:00 <netbsduser> so for page-out one goes lock page->lock its object->lock address spaces it's found in. but for page-fault handling, lock address space->lock object->lock page (so we can map it safely, otherwise it could be swapped out in the meantime)
15:33:00 <heat> ok, so: vm_object mappings_list_lock -> region->address_space->page_tables_lock -> mmu_unmap()
15:33:00 <heat> where page_tables_lock is a smallish spinlock you only hold when messing with page tables
15:35:00 <heat> my point is that in this case you don't need to lock the whole address space
15:36:00 <heat> your vm_region's aren't going anywhere while you hold the mappings_list_lock (as destruction implies removal, hopefully)
15:36:00 <heat> the page_table_lock ensures stability of your page tables
15:37:00 <heat> i'm not sure if you even need the page lock here
15:37:00 <netbsduser> yes, you make a good point that the address space locking doesn't necessarily need to happen
15:37:00 <heat> I assume your pages have refcounts?
15:39:00 <netbsduser> not directly, to prevent the struct vm_page from growing too large i have a separate struct paged_page for pages which are part of anonymous memory objects
15:40:00 <netbsduser> so i can avoid carrying around that data in the case of a page being used for wired memory only
15:40:00 <netbsduser> i am trying to think if there is any obstacle to having pages locked by the object (or paged_page) to which they belong
15:41:00 <heat> so it has a lock but not a refcount?
15:41:00 <heat> that seems kind of backwards IMO
15:41:00 <netbsduser> the lock logically belongs in paged_page and i think could be moved there safely; i have no idea why it's in vm_page, can't remember my rationale
15:44:00 <heat> so, I don't have page out yet but in theory I avoid pages getting swept from under me by just having a general refcount
15:44:00 <heat> can't swap if you have more than 1 ref or something
15:44:00 <heat> I'm still not sure if or how that's to be done
15:46:00 <netbsduser> what does the refcount count, number of places in which the page is mapped?
15:47:00 <heat> no, just active references
15:48:00 <heat> the vm object has one, your local struct page * variable has another, etc
15:49:00 <heat> so the page fault code does vmo_get_page, which grabs a ref and hands it back, then it maps it, then releases its ref
15:49:00 <heat> so a page that's mapped under 1 VMO on N mappings has a refcount = 1
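heat's refcount lifecycle reads roughly like this in C11 atomics. This is a toy sketch with hypothetical names (`page_get`, `fault_map_page`), not code from any of the systems discussed:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct page {
    _Atomic int refcount;   /* 1 == only the owning vm object holds it */
};

static struct page *page_get(struct page *p)
{
    atomic_fetch_add(&p->refcount, 1);
    return p;
}

static void page_put(struct page *p)
{
    atomic_fetch_sub(&p->refcount, 1);
}

/* Fault path: take a temporary ref while mapping, then drop it.  A page
 * mapped under one VMO in N mappings still ends at refcount == 1. */
static void fault_map_page(struct page *p)
{
    struct page *held = page_get(p);   /* what vmo_get_page would return */
    /* ... install the PTE here ... */
    page_put(held);
}

/* Page-out side: anything above 1 means someone is actively using the
 * page right now, so it can't be swept out from under them. */
static bool can_evict(struct page *p)
{
    return atomic_load(&p->refcount) == 1;
}
```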
16:14:00 <x8dcc> hello, I am trying to add a simple gdt in assembly but I can't compile the file because nasm can't find the gdt labels (gdt_data.data_descriptor and gdt_descriptor)
16:14:00 <x8dcc>
16:14:00 <bslsk05> ​ <no title>
16:15:00 <heat> . doesn't really work like that does it
16:15:00 <heat> try simply referring to .code_descriptor and .data_decriptor
16:15:00 <heat> (also typo in your code, *descriptor)
16:16:00 <x8dcc> oh, true
16:17:00 <x8dcc> you mean referring to .code_descriptor when declaring "CODE_SEG" and "DATA_SEG"?
16:17:00 <x8dcc> because that gives the same error. Same with gdt_start
16:17:00 <heat> yes, i mean that
16:18:00 <heat> /shrug
16:18:00 <heat> I don't use NASM
16:18:00 <x8dcc> my guess is that I have to make them global or something like that, but I am not sure
16:18:00 <pog> hello
16:18:00 <heat> no, no need for global there
16:18:00 <heat> wait
16:19:00 <heat> where's CODE_SEG defined?
16:19:00 <heat> this is all the same file?
16:19:00 <x8dcc> yes
16:19:00 <x8dcc> the paste I sent is the same file
16:19:00 <x8dcc> I just removed some comments from the middle to make it shorter lol
16:20:00 <heat> no idea then
16:20:00 <heat> you shouldn't need global
16:20:00 <heat> hopefully someone familiar with NASM can chime in
16:20:00 <x8dcc> yeah, I didn't want to try stuff I didn't understand until my code became a mess ^^'
16:21:00 <heat> CODE_SEG equ 0x8 DATA_SEG equ 0x10
16:21:00 <heat> if you want to be boring :))
16:21:00 <pog> yeah nasm doesn't really have sublabels that work like that iirc
16:22:00 * pog labels heat
16:22:00 <x8dcc> but what about gdt_start? that shouldn't fail, right?
16:23:00 <x8dcc> uhm, nevermind
16:23:00 <heat> i don't see why it would
16:23:00 <x8dcc> it was a typo...
16:23:00 <x8dcc> (I honestly didn't know how I fixed the gdt_start part, but the other one was another typo)
16:24:00 <x8dcc> now it compiles and runs, so I *guess* my gdt is loaded now. is there an easy way to check it?
16:24:00 <heat> what emu are you using?
16:25:00 <x8dcc> qemu
16:25:00 <heat> info registers should have a GDTR reg
16:25:00 <heat> s/GDTR/GDT/
16:25:00 <heat> which will hold the address and limit
16:25:00 <x8dcc> not sure what you mean by info registers
16:26:00 <heat> on the qemu monitor
16:26:00 <heat> ctrl-alt-2 probably
16:27:00 <heat> you can also pass -monitor stdio to the qemu cmdline for easier access
16:27:00 <x8dcc> yeah, I am having a look now, I didn't know this was a thing
16:27:00 <x8dcc> very useful, thanks
16:27:00 <heat> np
16:28:00 <pog> will nasm do that absolute far jump right?
16:28:00 <x8dcc> looks like it's loaded, indeed
16:29:00 <pog> ok then
16:29:00 <pog> don't worry about me i'm very stupid
16:29:00 * pog meow
16:29:00 <x8dcc> :)
16:30:00 <x8dcc> thank you for your help, heat
16:30:00 <heat> no problem
16:31:00 <x8dcc> and thank you for having a look too, pog
16:35:00 <kaichiuchi> pog: may i pet you
16:35:00 <pog> heat how stupid am i on a scale of 1-10
16:35:00 <pog> kaichiuchi yes
16:35:00 * kaichiuchi pets pog
16:35:00 <heat> pog, pog
16:37:00 <pog> heat heat
16:50:00 <heat> kaichiuchi
16:50:00 <kaichiuchi> heat:
16:51:00 <heat> xylophone
16:52:00 <kaichiuchi> spongebob
16:55:00 <gog> mew
16:56:00 <sham1> wem
16:56:00 <heat> gog, is gog a transformation of ada?
16:57:00 <heat> doesn't seem to be a trivial caesar cipher
16:58:00 <kaichiuchi> work is easy today
16:58:00 <kaichiuchi> considering that perforce is shot, and jira, and fisheye, and and and
17:00:00 <kaichiuchi> i still can’t believe people use atlassian products
17:00:00 <heat> sounds like you want BITBUCKET
17:02:00 <kaichiuchi> no please stop i thought we were friends
17:02:00 <sham1> You must become Atlassan
17:03:00 <kof123> without commenting on anything in particular, "vulture's gotta eat too" -- some clint eastwood movie
17:04:00 <kaichiuchi> freeRTOS is actually really cool
17:04:00 <heat> i like git and I like gerrit and I likeish bugzilla
17:04:00 <heat> and github is tolerable
17:04:00 <heat> the rest sux ballz
17:05:00 <kaichiuchi> i’m really surprised we didn’t do something crazy like buy QNX
17:07:00 <kaichiuchi> you know what’s scary?
17:08:00 <gog> heat: no i was using gog before i was called ada
17:08:00 <kaichiuchi> i use adaptive cruise control, which adjusts acceleration/braking based on the speed set point, following distance, and the car in front of me
17:08:00 <kaichiuchi> the system has crashed 4 times since i bought this car brand new.
17:09:00 <gog> kaichiuchi: i use jira and bitbucket every day and it's fine i guess
17:09:00 <gog> idk
17:09:00 <gog> i'd prefer something less fiddly
17:09:00 <gog> too many moving parts that are irrelevant for our use cases
17:09:00 <gog> anyhow i'm going home see yall in a bit
17:10:00 <kaichiuchi> the fact that any software in a car crashes at all is distressing
17:11:00 <kaichiuchi> the whole point of why i bought that car is for its high safety ratings
17:11:00 <heat> isn't that supposed to be super rare
17:12:00 <heat> you should complain
17:12:00 <heat> are you using a tesla
17:12:00 <kaichiuchi> SUPPOSED to be, yes
17:12:00 <heat> that sounds like a tesla
17:12:00 <kaichiuchi> no a subaru
17:12:00 <kof123> hmm, i used to like projecting speed/etc. on the windshield, which is kind of gimmicky, but means not having to look down
17:13:00 <kaichiuchi> point being, what if the system decides to panic and the brakes won’t disengage because they’re being stuck into an engagement command
17:13:00 <kof123> yes yes, feel the power. you have come over to the unix sysadmin side. terse commands, if you ran as root it is your own problem
17:14:00 <kof123> do as i say, don't ask questions!
17:14:00 <kaichiuchi> and I think to myself if car people can’t get safety features right
17:14:00 <kaichiuchi> how safe are we
17:15:00 <kaichiuchi> for an abnormally long time the nuclear launch codes were all zeroes
17:15:00 <heat> isn't the car industry super regulated when it comes to that?
17:15:00 <kaichiuchi> i would hope it is
17:16:00 <kaichiuchi> crashing software in a car, plane, navigation system, etc etc is unacceptable
17:16:00 <kaichiuchi> probably even your phone
17:16:00 <kaichiuchi> your phone is an emergency device these data
17:16:00 <kaichiuchi> days
17:19:00 <kaichiuchi> i’ve had the safety systems (eyesight) entirely cut out on me until I shut the car off and turned it back on
17:20:00 <kaichiuchi> i plug in my phone and if google maps and Music are active it lags out, becomes completely unresponsive, then reboots as I’m fucking driving
17:21:00 <Bitweasil> Oh man, that PAL set to zeros story just keeps popping up. :D
17:22:00 <kaichiuchi> i don’t know how our species has billions of members
17:23:00 <kaichiuchi> we accidentally dropped a nuke in north carolina and the only thing that kept it from detonating was the shittiest low voltage switch
17:23:00 <heat> should've nuked it anyway
17:24:00 <kaichiuchi> we’ve come close to blowing ourselves up more than once
17:24:00 * kof123 holds up picture of mushroom cloud labelled "bazinga"
17:24:00 <heat> i am aware
17:24:00 <kaichiuchi> and people shrug their shoulders and say “shit happens”
17:24:00 <heat> kof123, dropping a nuke on yourself is a classic zinger
17:24:00 <kaichiuchi> also the only reason i’m going on this rant is because my car just did the thing i’m complaining about
17:25:00 <kaichiuchi> and it’s osdev related because they’re probably using QNX/INTEGRITY
17:25:00 <kaichiuchi> so sue me
17:38:00 <geist> side note i did an upgrade of my subaru's entertainment software and disassembled the update, and it's QNX all the way down
17:38:00 <geist> thought it was kinda curious
17:38:00 <geist> wasn't even encrypted, just signed
17:38:00 <geist> basically some sort of zip file with a few file system images in it, etc
17:42:00 <kaichiuchi> did it ever crash for you?
17:42:00 <kaichiuchi> because, i bought the car with 1 mile on it
18:00:00 <netbsduser> heat: i put my problem to an NT expert
18:01:00 <geist> the entertainment part of the subaru? not really. it's refused to pair with my bluetooth phone once or twice, but i realized if you just hold in the volume button for 10 seconds it reboots
18:01:00 <geist> but of course that's not the engine computer, so it just reboots the entertainment bits
18:02:00 <netbsduser> he counselled me that my locking scheme is dubious, with some locks too big and some too small, and explained briefly the NT approach, which is to have a single spinlock (and extremely simple structures in the VMM so you don't waste time when you hold it)
18:04:00 <netbsduser> when you need to page-in or page-out, he advises to set a bit in the page struct explaining that the page is in transition, then if you run into an in-transition page during a fault, you wait on an event associated with the page, and then re-process the fault (in case anything changed globally)
18:05:00 <geist> that's a strategy
18:06:00 <netbsduser> i am going to try it
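The NT-style scheme described above — one big VM lock, a per-page in-transition bit, and an event that faulters wait on before re-running the whole fault — can be sketched with a pthread condition variable standing in for the event. All names here are made up for illustration:

```c
#include <pthread.h>
#include <stdbool.h>

/* The single VM spinlock, modelled as a mutex in userspace. */
static pthread_mutex_t vm_lock = PTHREAD_MUTEX_INITIALIZER;

struct page {
    bool in_transition;   /* set while paging I/O is in flight */
    bool resident;
    pthread_cond_t done;  /* the "event" faulters wait on */
};

/* Pager side: mark the page, drop the lock across the slow I/O, wake. */
static void page_in(struct page *p)
{
    pthread_mutex_lock(&vm_lock);
    p->in_transition = true;
    pthread_mutex_unlock(&vm_lock);

    /* ... slow disk I/O happens here, with no VM lock held ... */

    pthread_mutex_lock(&vm_lock);
    p->resident = true;
    p->in_transition = false;
    pthread_cond_broadcast(&p->done);
    pthread_mutex_unlock(&vm_lock);
}

/* Fault side: if the page is in transition, wait, then re-run the whole
 * fault, because anything may have changed globally while we slept. */
static bool handle_fault(struct page *p)
{
retry:
    pthread_mutex_lock(&vm_lock);
    if (p->in_transition) {
        pthread_cond_wait(&p->done, &vm_lock);
        pthread_mutex_unlock(&vm_lock);
        goto retry;                /* re-process from the top */
    }
    bool ok = p->resident;         /* a real kernel would map it here */
    pthread_mutex_unlock(&vm_lock);
    return ok;
}
```

pthread_cond_wait can wake spuriously anyway, so looping back and re-checking the predicate is mandatory, which dovetails neatly with "re-process the fault" above.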
18:08:00 <netbsduser> my vmm is modelled after NetBSD's UVM, which was the subject of a PhD thesis. so i think for my first attempt at a proper VMM with swapping i can allow myself a bit of leniency
18:10:00 <mjg> it was a thesis 2 decades ago
18:10:00 <mjg> today it is a bad idea
18:10:00 <mjg> most likely anyway
18:10:00 <netbsduser> i should be glad to hear why
18:10:00 <mjg> i can't stress enough it all happened prior to multicore being a thing
18:11:00 <mjg> so if it happens to work out with that setting, which i know it does not in netbsd, it would be a pure accident
18:11:00 <mjg> or to put it differently, there is way more ram and cpu cores to deal with
18:11:00 <mjg> i would not expect any ideas which were sensible in that period to be sensible today
18:12:00 <mjg> apart from aforementioned accident
18:12:00 <netbsduser> but it was expressly designed for multicore
18:12:00 <mjg> where
18:12:00 <netbsduser> the Mach VMM which was its antecedent and inspiration was too
18:12:00 <mjg> i know it does not perform for shit, lemme find it
18:12:00 <mjg> the mach vm is what landed in freebsd and it did not perform either
18:12:00 <mjg> i'm pretty sure the paper talks about it
18:13:00 <netbsduser> cranor's thesis discusses the fine-grained locking which was intended for SMP viability
18:13:00 <mjg> oh there it is
18:13:00 <bslsk05> ​ 'performance issues during -j 40 kernel' - MARC
18:13:00 <mjg> you do understand fine-grained locking does not scale?
18:13:00 <mjg> past a handful of cores
18:13:00 <mjg> what you need is lockless operation aided with some form of safe memory reclamation mechanism
18:13:00 <mjg> rcu being an example
18:14:00 <mjg> as in, you fine grained lock something, except if it is used by numerous cores at the same time, you have a de facto global lock there
18:17:00 <geist> i kinda didn't want to go there, because i sense that netbsduser is not ready for that sort of thing
18:19:00 <mjg> to expand on this point, the vm in illumos and freebsd both suffer greatly from "fine-grained" locking
18:20:00 <mjg> and of course in netbsd
18:20:00 <mjg> interestingly this is largely true even for linux, albeit to lesser extent
18:20:00 <geist> fuchsia too, i should add
18:21:00 <mjg> linux protects the address space with a rw lock, taken for writing for mmap/munmap
18:21:00 <mjg> but faults are handled with a read lock
18:21:00 <kaichiuchi> i think today i’m going to start osdev
18:21:00 <mjg> the end result is that this performs better than the previously mentioned systems, but remains a bottleneck
18:21:00 <geist> yah basically same as fuchsia. we have a rw lock for the overall aspace, and then each object has its own mutex, etc
18:21:00 <mjg> i'm very surprised frankly they did not sort it out yet
18:21:00 <geist> the object locks are a big source of contention
18:22:00 <mjg> are you using 'object' here in the same sense mach vm is?
18:22:00 <mjg> kind of context dependent :)
18:22:00 <geist> probably? ie, a vnode, a file, or a container of pages
18:22:00 <mjg> do you have object chains to handle cow et al?
18:22:00 <geist> yes, which is also a source of contention
18:22:00 <mjg> yep
18:23:00 <mjg> that's the mach vm stuff right there, also a problem on freebsd
18:23:00 <geist> works well enough for v1, but ends up being kinda a problem in the long run, etc
18:23:00 <mjg> i have a wip patch for it :p
18:23:00 <geist> yep
18:23:00 <geist> trouble is we are also somewhat more flexible than posix which in that case hurts the overall design, since there are a few optimization paths we can't go down
18:23:00 <mjg> it requires memory safety, which if i recall correctly you guys don't provide on purospe
18:23:00 <mjg> purpose
18:23:00 <geist> hmm? what do you mean by memory safety?
18:23:00 <mjg> rcu-like
18:24:00 <geist> oh just haven't invested in lockless stuff
18:24:00 <geist> that's part of the whole v2 lets rethink some of this bits
18:24:00 <geist> baby steps, as usual
18:25:00 <mjg> the fundamental observation is that with rcu-like you can safely traverse the obj chain and validate correctness of said traversal
18:25:00 <mjg> without locking squat
18:25:00 * geist nods
18:25:00 <mjg> well apart from the final page or whatever you were looking for
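mjg's point — traverse the object chain with no locks, then validate that the traversal was correct — can be mimicked crudely with a generation counter and a seqlock-style retry. Real RCU additionally guarantees the objects aren't freed mid-walk (the safe memory reclamation part), which this toy, with invented names, does not provide:

```c
#include <stdatomic.h>
#include <stddef.h>

struct vm_obj {
    struct vm_obj *_Atomic backing;  /* next object in the shadow chain */
    int page;                        /* -1 if this object lacks the page */
};

/* Bumped by any writer mutating any chain.  A real kernel would use a
 * proper odd/even seqcount or RCU instead of this single counter. */
static _Atomic unsigned chain_gen;

static void chain_set_backing(struct vm_obj *o, struct vm_obj *backing)
{
    atomic_fetch_add(&chain_gen, 1);
    atomic_store(&o->backing, backing);
    atomic_fetch_add(&chain_gen, 1);
}

/* Walk the chain without locking squat; if a writer ran concurrently,
 * the generation moved, so throw the result away and retry. */
static int lookup_page_lockless(struct vm_obj *top)
{
    for (;;) {
        unsigned gen = atomic_load(&chain_gen);
        int found = -1;
        for (struct vm_obj *o = top; o; o = atomic_load(&o->backing)) {
            if (o->page != -1) {
                found = o->page;
                break;
            }
        }
        if (atomic_load(&chain_gen) == gen)
            return found;   /* traversal raced with no writer: valid */
    }
}
```

Only the page actually found would then need a lock or a reference, which is the "apart from the final page" caveat.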
18:26:00 <mjg> at least on freebsd, while there is contention in the main vm obj of course, there is way more on the backing one
18:26:00 <mjg> which will be libc, loader and other crappers
18:26:00 <geist> exactly. libc is the bad one
18:26:00 <geist> we get a gnarly chain on it in particular
18:27:00 <mjg> i had an idea to implement a mechanism which lets you express what typically happens with these
18:28:00 <mjg> so you can do it upfront
18:28:00 <mjg> and hopefully avoid overhead later
18:28:00 <geist> yeah we have some transforms we were thinking about doing on that whole obj chain. have some things we're thinking of
18:28:00 <mjg> fucking computers man
18:28:00 <geist> yah totally
18:28:00 <mjg> i don't remember if i asked, do you have flamegraphs from a real workload?
18:28:00 <mjg> running on fuchsia
18:29:00 <geist> i have a crazy idea in the back of my head where you invert the whole thing and just have objects refer to page runs directly, a-la ZFS or BTRFS
18:29:00 <geist> no obj chains, just simply COW the pages directly
18:29:00 <mjg> i think that's the gist of what netbsd is doing?
18:29:00 <mjg> i don't remember exactly what they did, but i do remember they whacked obj chains
18:29:00 <geist> possibly. also i know that linux kinda does this, but they do it because they're posix and most pages are not shared, etc
18:30:00 <geist> that's the optimization route we can't easily take, because the fuchsia VM design is somewhat more flexible
18:30:00 <geist> and makes no real distinction between shared and not shared vm objects
18:30:00 <geist> which means you dont get an easy optimization path
18:30:00 <geist> ie, a more classic mach/pmap design
18:31:00 <netbsduser> netbsd returns to the sunos approach where you replace object chains with reference-counting of pages
18:31:00 <geist> yeah, i think that's the transformation we are kinda thinking about for v2
18:32:00 <geist> thinking about how to ref count stuff, and more importantly if/when you have to ref count 'holes' in vm objects which is kinda expensive, etc
18:33:00 <geist> well at least i think a v1.5 we've been mulling is to keep the chain but have some ref counting to short circuit walking up and down the chain
18:33:00 <geist> but i haven't thought about it in a while, i'm not directly working on it so the details are a bit hazy
18:33:00 <mjg> well the crux of my idea is to not ref any singular pages
18:34:00 <mjg> but what is a typically used collection of pages
18:34:00 <mjg> from the backing obj
18:34:00 <geist> yah i was thinking of something kinda like that: some sort of huge btree of page runs that things have refs on. probably would be a nightmare to pull off though
18:34:00 <geist> and no guarantees of speed
18:34:00 <mjg> say you could blindly sort out any faults coming from libc for "normal" programs
18:35:00 <mjg> ye this requires a lot of experimentation
18:35:00 <mjg> 's basically the most time consuming thing of the entire ordeal
18:35:00 <geist> yah
18:35:00 <geist> re flamegraphs not exactly. i dont think we have things instrumented that finely
18:36:00 <mjg> feed for flamegraphs is trivial to generate
18:36:00 <geist> what we do have is fairly fine grained logging of events, so we can get a pretty nice systemwide trace of all the threads in and out of the kernel
18:36:00 <mjg> onyx got it :p
18:36:00 <geist> so it's sort of a similar thing
18:36:00 <geist> can see where the time is going
18:36:00 <mjg> i'll be happy with whatever profile of a workload you can share
18:36:00 <geist> just not at the function level or whatnot
18:37:00 <geist> oh i dunno, i dont have that handy, as is usual with fuchsia it's complicated as fucking hell
18:37:00 <mjg> worst case it won't be comprehensible without knowledge of fuchsia
18:37:00 <geist> we have a problem with generating the most complicated fucking tools on the planet, because google
18:37:00 <mjg> welp it would just be a mild curiosity
18:37:00 <mjg> heh
18:37:00 <mjg> ye i heard about that being a thing in the past
18:37:00 <geist> more like you need to check out a 50GB tree, build it, and only once you've built it can the tools make heads or tails of it, because the trace is highly compressed and needs the built tree, etc
18:37:00 <mjg> i guess culture is hard to change, innit
18:38:00 <geist> the trace is nice, but it's one of those has to go with some sort of db that comes out of the build
18:38:00 <mjg> well the guerilla flamegraph support is easily a few hundred lines of simple code
18:38:00 <mjg> one can hack it in an evening
18:38:00 <geist> honestly all you'll tell me is 'yes this here is a disaster, there is a disaster' and we're well aware of it
18:38:00 <mjg> benefit:effort ratio is galactic on this one
18:38:00 <geist> it's kinda depressing to be honest, i really dont want someone else to tell me the same thing
18:39:00 <mjg> :))
18:39:00 <mjg> look i was just curious what's happening, that things will be slow was already explained
18:39:00 <geist> but there are a few people that are working on it. there are a few low hanging fruits that dominate everything, so you can't get to the next level
18:39:00 <mjg> i'm pretty sure you mentioned you are rolling with making it work
18:39:00 <geist> sure
18:39:00 <mjg> perf will be a concern at some point down the road which makes sense
18:39:00 <geist> it's a concern now. we're at 1.0, works well enough for current things
18:40:00 <mjg> i mean as long as the current work does not go directly against that effort ;)
18:40:00 <geist> and now we're starting to get much more concerned about scalability and the next level of perf, etc
18:40:00 <mjg> well then the first thing i would do is real adaptive spinning in mutexes
18:40:00 <mjg> afair you just roll with some spinning and time out
18:40:00 <geist> yeah we have that
18:40:00 <mjg> i guarantee this is a big perf loss
18:40:00 <geist> that helped a lot
18:40:00 <mjg> you do *adaptive* spinning now?
18:41:00 <mjg> that is, you detect lock owner is off cpu?
18:41:00 <geist> we have shit spinlocks, and there's still a single scheduler lock, which is the major bottleneck
18:41:00 <geist> yep
18:41:00 <mjg> openbsd-vibes
18:41:00 <mjg> lemme check that real quick
18:41:00 <geist> again i didn't want to mention because you were just going to rag on it
18:41:00 <mjg> fine
18:41:00 <geist> yes yes i know it's bad. i honestly feel bad about it every day
18:41:00 <mjg> i wont say squat
18:41:00 <mjg> friendly suggestion was above :)
18:41:00 <geist> because that's my doing. missed the opportunity to update before things got big, and now it's harder to break the lock apart
18:42:00 <geist> but someone is on it, they have a design, working through it now
18:42:00 <mjg> with scheduling i'm honestly more concerned about big.LITTLE
18:42:00 <geist> but not a day goes by that i dont kick myself for having not at least done a trivial breakup early on
18:42:00 <mjg> fine-grained locking in the scheduler is ultimately some work and one will get there
18:42:00 <geist> but that was when we were just trying to get it working before getting cancelled
18:42:00 <geist> yah but it's a big bottleneck Right Now
18:43:00 <geist> we're seeing it, and it tends to dominate everything contention wise
18:43:00 <raggi> geist: you're too hard on yourself, we shipped
18:43:00 <mjg> so that's a spinlock?
18:43:00 <geist> yes
18:43:00 <mjg> this probably can be damage-controlled until fixed
18:43:00 <geist> yes. that's what we've been doing for the last few years
18:43:00 <mjg> 1. perhaps the spinlock code behaves worse than it can under contention
18:44:00 <geist> yes yes yes
18:44:00 <geist> i know.
18:44:00 <mjg> 2. it is plausible there are pessimal relock trips
18:44:00 <mjg> which you can whack altogether
18:44:00 <mjg> as in something locks and unlocks the scheduler several times while doing something common
18:44:00 <mjg> ok i'm getting off your back
18:45:00 <mjg> i need to find some real time(tm) to finish my patchez
18:45:00 <mjg> got several "survives benchmarking" projects which need finishing up, but who wants to do that
18:45:00 <geist> and yeah one of the other paths is to finally rewrite the spinlock. when that day comes i do actually have some good questions for you
18:45:00 <geist> ie, ticket vs the other spinlock (i forget the name)
18:45:00 <mjg> mcs
18:45:00 <geist> and where they start to scale differently
18:45:00 <mjg> clh
18:45:00 <geist> mcs yeah
18:45:00 <mjg> clh might be a better choice if you don't do numa
18:45:00 <mjg> it has cheaper unlock
18:45:00 <geist> right now we have the fairly simple spin with halt/wfi one
18:46:00 <geist> right now the kernel only actually has a handful of spinlocks. like in the 10 category, though as we break up the scheduler lock there will be a proliferation of them, since probably each wait queue will get one, etc
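[editor's note: the fair lock mjg is suggesting above can be sketched in a few lines of C11 atomics — this is an illustrative ticket lock, not Zircon's actual implementation; FIFO ticket order is what bounds the tail latency and stops big cores from barging]

```c
#include <stdatomic.h>

/* Ticket spinlock sketch: each acquirer takes a ticket, then spins
 * until now_serving reaches it. Strict FIFO hand-off is the fairness
 * property discussed above. */
typedef struct {
    atomic_uint next_ticket;
    atomic_uint now_serving;
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l) {
    unsigned me = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                            memory_order_relaxed);
    while (atomic_load_explicit(&l->now_serving,
                                memory_order_acquire) != me)
        ;  /* on arm64 this is where you would WFE instead of burning cycles */
}

static void ticket_unlock(ticket_lock_t *l) {
    /* the release store is what wakes WFE waiters via the monitor */
    atomic_fetch_add_explicit(&l->now_serving, 1, memory_order_release);
}
```

The trade-off mjg notes applies: under heavy contention a ticket lock gives up some throughput versus an unfair test-and-set lock, but it caps how long any one waiter (e.g. a little core) can starve.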
18:47:00 <mjg> do you have something for deadlock detection?
18:47:00 <geist> (remember it's a ukernel so there aren't really any drivers or large subsystems that have intricate dat structures)
18:47:00 <geist> yes
18:47:00 <geist> simple thing: the spinlock writes the cpu number that holds it, etc
18:47:00 <mjg> solaris does not :p
18:48:00 <mjg> i mean do you detect lock ordering issues between different spinlocks
18:48:00 <geist> and we have a lockup detector where each cpu periodically cross checks the other ones that they've made forward progress
18:48:00 <mjg> say foo -> bar -> baz
18:48:00 <geist> works fairly well for finding deadlocks
18:48:00 <mjg> and someone tries to bar -> foo
18:48:00 <Ermine> kaichiuchi: lol, now I'm having weird issues with pasting in vim+foot
18:48:00 <mjg> i see
18:48:00 <geist> ah no, but again we have like literally 10 spinlocks. can actually manually prove that they dont nest
18:48:00 <mjg> mutexes?
18:48:00 <kaichiuchi> Ermine: :D
18:48:00 <geist> mutexes do though
18:48:00 <geist> also have a debug lock ordering detector, which is pretty neat
18:48:00 <geist> it can determine *potential* ordering issues
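[editor's note: the "*potential* ordering issues" idea can be shown with a toy edge-recording validator — a hypothetical sketch, not Zircon's actual lock validator: record every observed "acquired B while holding A" pair, and flag the reverse pair even if the two threads never actually raced]

```c
#include <stdbool.h>

/* Toy lock-order validator: ordered_after[a][b] records that lock b
 * was once taken while lock a was held. Seeing the inverse edge later
 * means an ABBA deadlock is *possible*, even if it never happened. */
#define MAX_LOCKS 16

static bool ordered_after[MAX_LOCKS][MAX_LOCKS];

/* Returns false if acquiring 'b' while holding 'a' inverts a known order. */
static bool note_acquire(int a, int b) {
    if (ordered_after[b][a])
        return false;              /* potential ABBA deadlock */
    ordered_after[a][b] = true;
    return true;
}
```

A real validator (like the one geist describes, or Linux lockdep) also needs transitive closure over the edge graph and per-class rather than per-instance tracking; this direct-edge version is only the core idea.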
18:49:00 <mjg> that is what i asked about
18:49:00 <geist> yah have that, just not on spinlocks
18:49:00 <mjg> so arm64 is your primary dev platform?
18:49:00 <geist> x86-64 too
18:50:00 <geist> they're equal
18:50:00 <mjg> but you ship only on arm?
18:50:00 <nur> I misspelled the company name that I tried to apply to
18:50:00 <geist> no comment
18:50:00 <mjg> current products
18:50:00 <nur> well scratch that job
18:50:00 <mjg> :)
18:50:00 <mjg> ok
18:50:00 <geist> they're equal priority, lemme just leave it at that
18:50:00 <mjg> not digging
18:50:00 <geist> in other words, even if we ship more on this or that, it's unacceptable to leave one behind
18:51:00 <geist> except maybe some feature like PCID or whatnot, if it's purely optimization
18:51:00 <mjg> i'll only say the most pragmatic choice for spinlocks is a fair lock
18:51:00 <geist> yeah fairness is the big problem, especially with big.LITTLE
18:51:00 <mjg> it will leave some perf on the table in terms of througput, but will also limit tail end latency
18:51:00 <geist> we have actually observed that big cores with a pure spinning lock wins like 90% of the time
18:52:00 <mjg> i have not tested big.little on that front but i can easily believe this is true
18:52:00 <geist> yah it's a double whammy: we have too much contention on too few locks, and then you get double penalized because the big cores barge in and take the time
18:52:00 <mjg> oh wait on that fucking arm the little cores don't do cas, do they
18:52:00 <mjg> just ll/sc
18:53:00 <geist> correct, though newer cores do cas, but we're still shipping and will probably continue to ship on v8.0 (ll/sc) cores
18:53:00 <mjg> and they still win most of the time without employing cas?
18:53:00 <geist> the cas stuff (v8.1) is literally under a feature called 'large system something' LSE i think
18:53:00 <mjg> if so that's pretty bad
18:53:00 <mjg> ye
18:54:00 <mjg> large systems extension or so
18:54:00 <geist> well it's because arm has a mwait/monitor like thing (WFE/SEV) that you use to park the cores when they are looping
18:54:00 <geist> nice, but i think the big cores wake up and get back to spinning faster
18:54:00 <mjg> one potential solution is to add a bit that little cores are trying
18:54:00 <mjg> may or may not be good enough
18:55:00 <mjg> anyhow you should be able to find someone at G who is versed in the area
18:55:00 <geist> yah, honestly i haven't fully grokked how mcu/etc deals with WFE, since the ll/sc model works nicely for WFE
18:55:00 <geist> but cas may not interlock properly
18:55:00 <geist> surprisingly few, lots of folks at G are *very* versed in x86. godlike status, etc
18:55:00 <geist> but arm is much less so
18:56:00 <geist> as you can probably imagine
18:56:00 <mjg> well i'm not surprised most people are amd64-only or similar
18:56:00 <mjg> i do note there are crazies for ppc, arm or what have you
18:56:00 <mjg> and i would be quite surprised if you did not have any
18:56:00 <geist> but i'm not saying i or others can't figure it out, it's more like i haven't sat down to take that task on and figure it out
18:56:00 <mjg> ye ye
18:57:00 <geist> i tend to try to take on a few tasks at a time, and try to deeply grok it before implementing
18:57:00 <mjg> i guess one could literally ask arm
18:57:00 <geist> so i know it's on the table somewhere but haven't sat down to really do it yet
18:57:00 <mjg> or some other vendor
18:57:00 <geist> yep, we have arm experts around
18:57:00 <geist> just less of them, and again i haven't sat down to really solve it.
18:57:00 <geist> only so many tasks and hours in the day
18:58:00 <mjg> well i can say that spinning behavior massively influences perf
18:58:00 <mjg> you can have easily a factor of 3-4 of diff even at a small scale
18:58:00 <mjg> compared to a naive spinlock
18:58:00 <mjg> even while still leaving a lot on the table due to contention being there to begin with
18:59:00 <geist> yah though like i said the arm spin at least is pretty nice, since it doesn't actually spin
19:00:00 <geist> basically a pretty fine grained monitor that's cheap to break
19:05:00 <mjg> but what happens when you wake up? you try the atomic again?
19:06:00 <geist> sure. but i mean it doesn't spin any more than it has to
19:07:00 <geist> basically if it works right the spinlock only spins on any cpu as much as someone releases it (since that breaks the WFE on all the other cores)
19:08:00 <froggey> mjg: that thing about linux protecting the address space with a rw lock is funny. exactly the same solution I came up with
19:09:00 <geist> basically with ll/sc there's an intrinsic monitor the cpu puts on that particular cache line, and if you WFE afterwards (after detecting that the lock is held), it waits for that monitor to break by some other cpu storing to it
19:09:00 <geist> a nice elegant solution
19:10:00 <geist> what i haven't grokked is how that works if you're using the cas style atomics on arm
19:10:00 <geist> since you do that in one instruction
19:10:00 <mjg> key is that if they wake up at roughly the same time
19:10:00 <froggey> I feel like ll/sc looks a lot better on paper than in practice
19:10:00 <mjg> the repeated attempt bounces the cacheline more
19:10:00 <geist> oh 100% and then the big cores tend to win
19:10:00 <mjg> right
19:10:00 <geist> faster cache/cpu/etc
19:11:00 <mjg> look you can probably get an armista to make this into a ticket lock in few hours
19:11:00 <geist> no doy
19:11:00 <mjg> and this will perform drastically better
19:11:00 <geist> possibly
19:11:00 <geist> i'd like to look at it this year indeed
19:11:00 <geist> i just have 5 other more important things to do right now
19:12:00 <geist> including not blowing half the day on irc! (but this is actually a good conversation, so i'll bill it to the company)
19:12:00 <mjg> note as "meeting" in the timesheet!
19:12:00 <geist> heh exactly
19:12:00 <geist> reminds me i need to go reboot my ARM workstation box
19:12:00 <geist> it sounds like a vacuum cleaner when you start it
19:13:00 <mjg> sun vibes
19:13:00 <geist> totes
19:13:00 <geist> it's basically a 4U longass server box standing on end with some screwed on feet
19:13:00 <geist> when it first boots up all the fans go 100% *backwards* until the firmware tells them to stop and spin properly
19:14:00 <mjg> ll
19:14:00 <mjg> i mean lol, not link load
19:14:00 <ddevault> single stepping my interrupt handler assembly in gdb
19:14:00 <ddevault> I have seen this place before
19:15:00 <mjg> ddevault: you reminded me of a funny quote from early 00s or so
19:15:00 <mjg> ddevault: kernel debugger manual suggested you print it, because it is hard to read it while single-stepping the kernel
19:18:00 <ddevault> hah
19:19:00 <jafarlihi> Hey. What's the best and cheapest way of learning what the currently utilized disk I/O speeds are in Linux? I know you can cat /sys/block/sda/stat but it has count for read/written sectors, so you have to keep past state to know what the current speed is. Is there a way of getting raw speed without persisting past state to compare to?
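[editor's note: short answer to the question above — there is no instantaneous-rate file; the kernel only exports cumulative counters, so every tool (iostat included) samples twice and divides. The delta arithmetic is trivial; the sector unit in `/sys/block/<dev>/stat` is fixed at 512 bytes regardless of the device's real sector size]

```c
#include <stdint.h>

/* Throughput between two samples of the cumulative sector counter
 * from /sys/block/<dev>/stat. The file's unit is always 512-byte
 * sectors, per the kernel's block-layer stat documentation. */
static double throughput_mib_s(uint64_t sectors_then, uint64_t sectors_now,
                               double interval_s) {
    uint64_t bytes = (sectors_now - sectors_then) * 512u;
    return (double)bytes / (1024.0 * 1024.0) / interval_s;
}
```

So the "past state" is unavoidable, but it's just one integer per counter held between two reads, not anything persisted to disk.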
19:21:00 <netbsduser> mjg: omnipresent smartphones have really simplified things for the "only have one computer" case
19:22:00 <jafarlihi> I'd use my smartphone for everything if you could root it and it'd work like normal linux and you still could get updates
19:22:00 <jafarlihi> otherwise you just use it and pray it's not hacked cuz you can't audit shit
19:25:00 <mjg> netbsduser: btw one funny thing i wanted to look at is the benchmark mentioned in the uvm paper
19:25:00 <mjg> netbsduser: but i don't have era-appropriate hardware and i don't know if a virtual machine will be good enough to test
19:26:00 <mjg> i don't remember what it was exactly, but according to the results in the paper freebsd sucked horribly at the workload
19:26:00 <mjg> while netbsd+uvm did not
19:30:00 <geist> oh random thing re: weird x86 topology
19:30:00 <geist> was watching some reviews of the upcoming ryzen 7000X3D chips and apparently they're doing something kinda odd
19:31:00 <Bitweasil> The "half the chip gets more L3 than the other half" thing?
19:31:00 <geist> basically they're both giving you a high boost mhz *and* 3x the L3 cache. this is strange, normally stacking cache is ..
19:31:00 <geist> yep, as Bitweasil says
19:31:00 <Bitweasil> Yeah, that's going to be a fun pain in the rear to benchmark and schedule for
19:31:00 <geist> so the idea is that you have two dies of 8 cores (as ryzens are) but only one of them has a butt-ton of L3 but has a lower cap freq
19:31:00 <Bitweasil> I don't think schedulers are "Task L3 use" aware.
19:31:00 <geist> yah no
19:32:00 <Bitweasil> More and more, proper use of 'taskset -c' seems required. :/
19:32:00 <mjg> huh
19:32:00 <geist> and in windows world i'm sure win11 is basically mandatory, since i'm sure win 10 wont get scheduling logic for it
19:32:00 <mjg> is this the year of fucking with scheduler authors?
19:32:00 <geist> pretty much
19:33:00 <geist> i'm guessing in cpuid this just all shows up as unified L3
19:34:00 <geist> since in the past they've never treated it like a split L3 since it's shared
19:34:00 <geist> or humm, wait didn't the L3 go on the io die in the zen 3 era? i have to refresh my brain with all of this i guess
19:34:00 <Bitweasil> Didn't some of the Xeons have a split L3 capability?
19:34:00 <Bitweasil> Where you could segment off sections of it for various tasks?
19:35:00 <Bitweasil> I don't know if that ever got reported out, though.
19:35:00 <ddevault>
19:35:00 <geist> yeah interesting, the L3 on Zen 4 is on the io die, so does that mean with these 7000X3D chips there's a second bank of L3 sitting on top of one of the chiplets?
19:36:00 <geist> is it 'closer' to the cores on that die?
19:36:00 <mjg> ddevault: hello from irc!
19:36:00 <geist> ddevault: yay!
19:36:00 <mjg> fuck this, from now on i'm only doing risc-v
19:36:00 <geist> do it!
19:37:00 <mjg> still too mainstream
19:37:00 <mjg> gonna port fuchsia to ultrasparc
19:38:00 <Bitweasil> RISC-V now seems worth paying attention to, for sure.
19:39:00 <mjg> dude this is #osdev
19:39:00 <mjg> you are already in trashcan area
19:40:00 <mjg> - do you know jquery?
19:40:00 <mjg> - no, but i know how to arm interrupts on x86
19:40:00 <ddevault> real hardware
19:40:00 <mjg> - i don't think that's the skillset we are looking for
19:40:00 <ddevault> womp womp
19:40:00 <ddevault> err
19:40:00 <ddevault> real hardware
19:40:00 <ddevault> womp womp
19:40:00 <mjg> still looks like a screenshot mate
19:41:00 <mjg> also if you did not brick your bare metal box, did you even boot your os?
19:41:00 <ddevault> good question
19:44:00 <ddevault> the fuck is writing to 0x7f0000
19:44:00 <Ermine> hello helios!
19:45:00 <gog> womp womp
19:45:00 <gog> what's a FAR
19:45:00 <gog> explain aarch64 to me now
19:45:00 <Ermine> ESC (setup) ... is from u-boot?
19:45:00 <ddevault> no, edk2
19:46:00 <Ermine> ah
19:46:00 <ddevault> FAR is the fault address
19:46:00 <ddevault> ...fault address register
19:47:00 <ddevault> I should figure out how to unwind the stack
19:47:00 <geist> note on arm64 it's super cheap to build with frame pointer, and then you can unwind with r29 (fp)
19:47:00 <ddevault> yeah I have frame pointers
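[editor's note: the frame-pointer unwind geist describes is tiny because the AAPCS64 frame record is just a two-word pair that x29 points at — previous frame pointer, then saved x30 (lr). A sketch of the walker, demonstrated on a hand-built chain rather than a live stack (illustrative, not Helios code):]

```c
#include <stddef.h>
#include <stdint.h>

/* AAPCS64 frame record: x29 points at { previous x29, saved lr }.
 * Unwinding is just chasing the first pointer until it hits NULL. */
struct frame {
    struct frame *prev;
    uintptr_t lr;
};

/* Collects up to 'max' return addresses; returns how many were found. */
static size_t unwind(const struct frame *fp, uintptr_t *out, size_t max) {
    size_t n = 0;
    while (fp != NULL && n < max) {
        out[n++] = fp->lr;
        fp = fp->prev;
    }
    return n;
}
```

In a real fault handler you would seed `fp` from the saved x29 in the exception frame, and sanity-check each pointer against the thread's stack bounds before dereferencing it.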
19:47:00 <Bitweasil> gog, when you take a memory fault on ARM, the FAR (and FSR) report "what happened where."
19:47:00 <Bitweasil> Interrupts on ARM don't build an interrupt frame like they do on x86, so the information as to why you're there is typically stored elsewhere.
19:47:00 <gog> so FAR is CR2
19:47:00 <gog> got it
19:47:00 <Bitweasil> System control registers, mostly.
19:48:00 <ddevault> tbh it's pretty nice
19:48:00 <ddevault> though I don't like that syscalls are mixed in with other kinds of interrupts
19:48:00 <gog> CR2 is a more clear and obvious name imo
19:48:00 <gog> (haha joke)
19:48:00 <Bitweasil> :p
19:48:00 <geist> yah the ESR is a beast to decode, but pretty nice that it's there
19:48:00 <ddevault>
19:48:00 <bslsk05> ​ AArch64 ESR decoder
19:48:00 <ddevault> ez
19:49:00 <geist> syscalls are not *too* bad to test for, since you can just test for a particular bit pattern in the top of the ESR and then fork off in asm to the syscall path
19:49:00 <geist> it's like a 2 or 3 instruction sequence
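[editor's note: the 2-3 instruction test works because the exception class (EC) sits in the top bits of ESR_ELx — bits [31:26] — and SVC from AArch64 is EC 0b010101 (0x15), per the ARM ARM. The same check in C:]

```c
#include <stdbool.h>
#include <stdint.h>

/* ESR_ELx: EC (exception class) in bits [31:26]; EC 0x15 is an SVC
 * taken from AArch64, i.e. a syscall. One shift+compare suffices. */
#define ESR_EC(esr) (((esr) >> 26) & 0x3fu)
#define EC_SVC64    0x15u

static bool is_syscall(uint32_t esr) {
    return ESR_EC(esr) == EC_SVC64;
}
```

In the asm entry path this is the same idea: load ESR_EL1, shift right by 26, compare against 0x15, and branch to the syscall fast path before touching the general exception plumbing.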
19:49:00 <Bitweasil> I like how Intel drops some, but not all, vowels when they're naming things.
19:49:00 <ddevault> yeah, at least once you know which registers you can fuck up
19:49:00 <Ermine> I guess QBE is responsible for stack structure in hare?
19:49:00 <ddevault> atm I just save everything though
19:49:00 <ddevault> Ermine: yeah
19:49:00 <gog> what about all the E
19:49:00 <ddevault> the E?
19:50:00 <geist> note that riscv is more or less the same idea: one big entry point, a register describing what the exception was for, and at least one aux register with some additional details
19:50:00 <gog> EAX EBX
19:50:00 <geist> i just totally dig ESR: 'exception syndrome register'
19:50:00 <geist> syndrome sounds so hard core
19:50:00 <ddevault> feel free to teach your aarch64 assembler to call x0 eax
19:50:00 <Bitweasil> I just wish someone would bolt a M series interrupt system on the A series cores. :/
19:50:00 <Bitweasil> 1024 vectors or something silly? Yes, please!
19:50:00 <gog> dang
19:50:00 <geist> ddevault: that's what rosetta 2 does!
19:50:00 * ddevault shudders
19:50:00 <Bitweasil> I half expected Apple to have done something like that with the M1, but I don't think they did.
19:51:00 <gog> what would one even use 1024 interrupt vectors for
19:51:00 <Bitweasil> ddevault, Rosetta 2 is *legitimately* fast.
19:51:00 <ddevault> do I implement stack frame walking
19:51:00 <ddevault> or do I add a bunch of printf debugging
19:51:00 <ddevault> choices, choices...
19:51:00 <geist> dunno if you read it but in the last month or so someone wrote up a blog post basically describing how rosetta 2 seems to work
19:51:00 <Bitweasil> gog, faster response to different types of hardware interrupt. ARM only had IRQ/FIQ, and practically most ARMv8 systems route FIQ to the secure mode, so you really only have one interrupt for "Hardware needs something!"
19:51:00 <geist> and it's quite straightforward. most of the speed is because M1 is a monster, and they added some instructions here or there that directly map to x86 things, like the EFLAGS layout
19:52:00 <Bitweasil> geist, no, do you have a link to that?
19:52:00 <gog> ohh ok
19:52:00 <geist> otherwise it's mostly directly translate x86 to ARM64
19:52:00 <geist> hmm
19:52:00 <ddevault> oh I see what this does
19:52:00 <ddevault> that's not gross
19:52:00 <ddevault> I thought it was an assembler or something
19:52:00 <geist> Bitweasil: i think
19:52:00 <bslsk05> ​ Why is Rosetta 2 fast? | dougallj
19:52:00 <Bitweasil> Sweet, thanks!
19:53:00 <geist> also some obvious and clever things like mapping the ARM text segment just next to the binary so it can directly reach into the data segment, etc
19:53:00 <gog> what was the cpu that did AoT bytecode translation, alpha?
19:53:00 <gog> also for x86
19:53:00 <geist> hmm, transmeta?
19:53:00 <gog> yes
19:54:00 <gog> well no i thought there was another
19:54:00 <geist> would be interesting to see if someone has actually fully decoded and documented the raw ISA for transmeta
19:54:00 <ddevault> so the answer is evidently a bunch of printfs
19:54:00 <geist> i think it was VLIW so probably wonky
19:54:00 <gog> printf debugging for the win
19:55:00 <Bitweasil> The only one I know of that did "translate x86 to native" in the hardware was Transmeta. Itanium, at least initially, had a x86 core laying around the die for running x86 code, though I think later they had a software translator?
19:55:00 <Bitweasil> I never did much with Itanium.
19:55:00 <geist> yah iirc the first version of itanium had a x86 decoder, but it ran like ass. basically like one instruction at a time, so it didn't really utilize the pipeline
19:55:00 <gog> i thought there was something earlier than those even
19:56:00 <gog> ok the alpha one was software
19:56:00 <ddevault> oh duh my userspace executable is loaded at 0x7f0000
19:56:00 <geist> hmm, alphas thing was it had the PALcode, but that wasn't really bytecode
19:56:00 <ddevault> I guess the page mapping is wrong
19:56:00 <gog> yeah
19:56:00 <geist> more like you switched to a more native ISA that had even more access to things
19:56:00 <ddevault> L0 fault, hm
19:56:00 <ddevault> maybe my cache maintenance is wrong
19:57:00 <ddevault> let's throw in an invalidate everything and see what happens
19:57:00 <geist> yeah that can definitely happen when you're on real hardware
19:57:00 <geist> also remember the page tables on arm also fetch 'through' the cache, so you can get into trouble if you have stale shit in the cache when you turn the mmu on, etc
19:57:00 <froggey>
19:57:00 <bslsk05> ​ Crusoe Exposed: Transmeta TM5xxx Architecture 2
19:57:00 <geist> even if the cpu's cache is still disabled (i've been burned by that)
19:57:00 <ddevault> yes that was it
19:58:00 <raggi> re "why is rosetta fast" one of the big markers is how fast it does compilation toolchains - and the answer there is that it has the fattest loader cache that there's ever been, so it can spawn functional toolchain processes faster than native can load & link
19:58:00 <ddevault> enjoy all of my printfs
19:58:00 <geist> froggey: oooh neat
19:59:00 <ddevault> is there some kind of cache maintenance I need to do when writing to TTBR0_EL1
19:59:00 <geist> well no, but you should make sure your ducks are in a row
19:59:00 <geist> you probably want to fully invalidate the data cache before turning on the mmu
19:59:00 <ddevault> currently I do tlbi vae1, x0 where x0 is the virtual address | (asid << 48) for the new mapping
20:00:00 <ddevault> yeah I do that, this is post-MMU initialization runtime mappings
20:00:00 <geist> if the cpu's cache is off, then as you're filling in the initial page tables, the tables are being written directly to ram
20:00:00 <geist> there are a couple bits at the bottom of TTBR0 you might want to look at but i think 0 is okay as the default for those?
20:00:00 <geist> which one is tlbi vae1?
20:01:00 <geist> ie, decode vae1 for me
20:01:00 <ddevault> well, I'll tell you what I think it should be, then dig out the manual and double check
20:01:00 <ddevault> it should invalidate a given page by its virtual address in a particular ASID
20:01:00 <Bitweasil> I miss realworldtech's articles. :(
20:01:00 <Bitweasil> It seems that all the "deep CPU technical" writers are getting hired off with NDAs.
20:02:00 <ddevault> I also do tlbi vale1, which does the same but only invalidates the bottom of the page tables
20:02:00 <Bitweasil> Anandtech's gotten worse too about it.
20:02:00 <ddevault> I do vae1 for L0/L1/L2 changes, and vale1 for L3 changes
20:02:00 <ddevault> see ARMARM page 5201; D8.13.5
20:02:00 <geist> double check that you dont need to shift the address by 12
20:02:00 <Bitweasil> ddevault, there are sequences in the manual for "Thou Shalt" when doing certain things.
20:02:00 <Bitweasil> I would suggest following those.
20:02:00 <ddevault> it's really unclear regarding shifts
20:03:00 <ddevault> Bitweasil: ack, will look for them
20:03:00 <Bitweasil> You're in Aarch64?
20:03:00 <geist> it shouldn't be. it'll tell you what bits of the VA go in what bits of the register
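[editor's note: geist's "shift by 12" suspicion is the bug being discussed — as I read the TLBI VAE1 operand description, bits [43:0] take VA[55:12] (a page number, already shifted down by 12, not a byte address) and the ASID goes in bits [63:48]; passing a raw `va | (asid << 48)` invalidates the wrong page. A sketch of the packing, worth double-checking against D8.13.5:]

```c
#include <stdint.h>

/* TLBI VAE1 operand: VA[55:12] in bits [43:0] (i.e. the page number,
 * so the byte address must be shifted right by 12), ASID in [63:48].
 * Bits [47:44] (TTL) are left zero here, meaning "any level". */
static uint64_t tlbi_vae1_operand(uint64_t va, uint16_t asid) {
    return ((va >> 12) & 0xfffffffffffull) | ((uint64_t)asid << 48);
}
```

With ddevault's faulting address, `tlbi_vae1_operand(0x7f0000, asid)` puts 0x7f0 in the low bits, which matches the "wrong page stays cached in the TLB" symptom if the shift was missing.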
20:03:00 <ddevault> yes, aarch64
20:03:00 <ddevault> correction: it is /abundantly/ clear
20:03:00 <ddevault> there are like 50 cache invalidation instructions and the manual goes into information overload
20:04:00 <geist> well, if you want to make it a little simpler: the tlbi and the cache instructions are different
20:04:00 <Bitweasil> Ehm. Sorry. You're writing aarch64 system code without the manual open to the relevant pages? :p
20:04:00 <geist> though yes, tlb is really a cache too, but they act fairly differently
20:04:00 <ddevault> the manual is 12,000 pages long, Bitweasil
20:04:00 <Bitweasil> And, yes, TLB is a separate thing from the cache ops.
20:04:00 <Bitweasil> I know.
20:05:00 <Bitweasil> I'm currently writing an ARMv8 emulator.
20:05:00 <ddevault> having it open is just the start
20:05:00 <Bitweasil> Believe me, I know.
20:05:00 <geist> Bitweasil: yeah i was pointing it out to dennis95
20:05:00 <ddevault> finding what you actually need is the hard part
20:05:00 <geist> ddevault:
20:05:00 <geist> hi dennis95 ! sorry for tagging you
20:05:00 <Bitweasil> The table of contents is pretty good.
20:05:00 <ddevault> aye, it is
20:05:00 <ddevault> I know I'm probably at least in the right subchapter
20:05:00 <Bitweasil> I literally keep a Kobo Elipsa and koreader around for that sort of thing.
20:05:00 <Bitweasil> And it lives plugged in during the day, because it's constantly on.
20:05:00 <ddevault> I had a kobo elipsa until recently
20:06:00 <geist> at least it's nicely hyperlinked. a pdf reader that lets you go 'back' to the last page you were at helps immensely
20:06:00 <ddevault> if I had to read ARMARM in it I would jump in front of a train
20:06:00 <geist> since you can follow hyperlinks and then back out
20:06:00 <Bitweasil> ddevault, koreader.
20:06:00 <Bitweasil> The stock PDF reader sucks, yes.
20:06:00 <Bitweasil> It's fine for linear books.
20:06:00 <Bitweasil> But the third party reader is great.
20:06:00 <ddevault> ah fair point
20:06:00 <ddevault> did not try koreader
20:06:00 <ddevault> moot point anyway, it was stolen last week
20:06:00 <geist> yah dunno what platform you're on but the apple pdf viewer > *, on windows i've had good luck with okular
20:06:00 <Bitweasil> I've got a good expanding table of contents, back buttons, hyperlink following, the works.
20:06:00 <geist> and on linux i think evince is not great, but it seems to work
20:06:00 <Bitweasil> If you're on Apple, GoodReader is 100% worth the money.
20:07:00 <Bitweasil> (for iPad/iPhone)
20:07:00 <ddevault> >Arm recommends not invalidating entries that are not required to be invalidated to minimize the performance impact
20:07:00 <Bitweasil> The MacOS PDF reader is solid.
20:07:00 <ddevault> oh really
20:07:00 <ddevault> thank you arm
20:07:00 <Bitweasil> "If you flush all, don't come whining to us when your performance sucks."
20:08:00 <geist> yah that's where it also gets complex on arm. there are lots of areas where you can really fudge it and get even more performance than the default one-thing-at-a-time
20:08:00 <geist> but you're then playing with fire so it's always a hard call
20:08:00 <geist> on x86 a lot of this is just not on the table so you dont even have to think about it
20:08:00 <geist> (firing off multiple tlbis and then a single DSB is the biggie)
20:08:00 <Bitweasil> Yeah, thar be "implementation defined behavior."
20:09:00 <Bitweasil> It'll work, until it bites you in the rear in an insanely difficult to debug way. :/
20:09:00 <ddevault> okay I am not feeling this manual right now
20:09:00 <ddevault> let's invalidate everything on any mapping changes and come back later
20:09:00 <Bitweasil> It'll work...
20:09:00 <Bitweasil> The ARM manual is different from the Intel manual, though.
20:09:00 <ddevault> I am good programmer
20:09:00 <Bitweasil> Intel's stuff is "This is what we did, from your view."
20:09:00 <Bitweasil> ARM is "This is how to implement a spec-compliant chip."
20:10:00 <Bitweasil> So they're a different style of reading.
20:10:00 <Bitweasil> But the ARM one is a lot more "This is *exactly* how this instruction works," if you care.
20:10:00 <Bitweasil> Intel tends to gloss over corner cases that you shouldn't be hitting.
20:10:00 <geist> yah half the time i prefer one or the other, context sensitive
20:10:00 <Bitweasil> The Intel is better reading, tbh.
20:10:00 <ddevault> agreed with geist
20:10:00 <geist> kinda wish you had a treat me like i'm 5 version of the ARM manual for when you dont want to go into that much detail
20:10:00 <Bitweasil> ARM's manuals are like reading a legal document, mostly because they *are.*
20:10:00 <ddevault> the ARM manual is very very detailed, which is nice when you need to know a lot of detail
20:10:00 <geist> but the other half of the time the detail is very nice
20:10:00 <ddevault> but it /sucks/ when you don't need that much detail
20:11:00 <geist> i can tell you it gets better, but it's still a TOME
20:11:00 <Bitweasil> *shrug* You learn what you can skip.
20:11:00 <ddevault> and for the love of god split it up into volumes
20:11:00 <Bitweasil> Eh.
20:11:00 <geist> i want one of those standing pedestals with an ARM manual on it
20:11:00 <Bitweasil> It's no longer paper form.
20:11:00 <Bitweasil> I'm fine with one whopper of a PDF file.
20:11:00 <geist> with a little skull on the book bound in human flesh
20:11:00 <Bitweasil> Oh man.
20:11:00 <Bitweasil> Yeah.
20:11:00 <Bitweasil> That would be epic.
20:11:00 <ddevault> I can only open it firefox or xpdf, and only on a good day
20:11:00 <Bitweasil> Gold leaf decoration, artwork for each page...
20:12:00 <ddevault> all of the other readers I've tried choke and die
20:12:00 <Bitweasil> Metal clasp to bind it closed.
20:12:00 <geist> FWIW i've had no problem opening it with evince on linux, it's just not fast
20:12:00 <geist> ie, it doesn't crash on me
20:12:00 <geist> which some folks have reported
20:12:00 <Bitweasil> Evince should handle it on Linux, GoodReader will deal with it just fine on iOS.
20:12:00 <kaichiuchi> hi
20:12:00 <Bitweasil> Or get another Elipsa and put koreader on it. :)
20:12:00 <ddevault> evince is really slow with it at first
20:12:00 <ddevault> then after a minute or two it just crashes
20:12:00 <geist> yah i just can't imagine using it on an ipad or whatnot. may be nice for reading but what i do half the time is global searches, then hyperlink to it
20:13:00 <geist> ie, find the first hit of whatever i'm looking for and then there's usually a hyperlink to the description of the register, etc
20:13:00 <geist> seems like with an ipad or whatnot it's more good for linear or table of contents style searches
20:13:00 <geist> which would be pretty bad for something this big
20:14:00 <Bitweasil> geist, GoodReader lets you do exactly that.
20:14:00 <ddevault>
20:14:00 <bslsk05> ​ ~sircmpwn/helios: aarch64: the TLB isn't /that/ important, right - sourcehut git
20:14:00 <ddevault> fixed
20:14:00 <ddevault> moving right along
20:14:00 <Bitweasil> I do that constantly.
20:15:00 <Bitweasil> I'm not a "fill an iPad with a lot of apps" sort, but GoodReader of the current version is one I'll pay for easily.
20:19:00 <geist> possible the problem is the asid stuff you're passing in
20:20:00 <geist> also you have a DSB in those arch routines right?
20:20:00 <ddevault> nope
20:20:00 <ddevault> totally forgot it
20:20:00 <ddevault> probably related
20:20:00 <kaichiuchi> i have to ask a dumb question
20:20:00 <ddevault> will investigate later
20:20:00 <geist> yeah... that's like, super important
20:20:00 <geist> you *must* dsb after a tlbi
20:20:00 <ddevault> ack
20:20:00 <kaichiuchi> architecture abstraction is obviously possible, but it seems almost impossible
20:20:00 <ddevault> done for tonight but I'll see if that trivially solves my problem tomorrow
20:20:00 <geist> tlbi basically is non blocking, it only starts the transaction, dsb waits until it completes
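geist's tlbi/dsb ordering point, sketched as a small aarch64 helper. This is a hedged illustration only: the function name is made up, and the choice of `tlbi vaae1is` (invalidate by VA, all ASIDs, EL1, inner-shareable broadcast) is one reasonable option, not what any kernel in this log actually does.

```c
// Illustrative sketch: invalidating one page's TLB entries on aarch64.
// tlbi only *starts* the invalidation; the trailing dsb waits for it to
// complete, and isb resynchronizes the instruction stream afterwards.
#include <stdint.h>

static inline void arch_tlb_invalidate_page(uintptr_t va)
{
#if defined(__aarch64__)
    // The TLBI register argument carries VA[55:12] in its low bits.
    uint64_t arg = (va >> 12) & 0xFFFFFFFFFFFULL;
    __asm__ volatile(
        "dsb ishst\n\t"        // make the PTE update visible first
        "tlbi vaae1is, %0\n\t" // broadcast invalidate, all ASIDs, EL1
        "dsb ish\n\t"          // wait for the invalidation to finish
        "isb"                  // resync the pipeline
        : : "r"(arg) : "memory");
#else
    (void)va;                  // other arches: no-op in this sketch
#endif
}
```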
20:21:00 <geist> kaichiuchi: what do you mean?
20:21:00 <kaichiuchi> hm, i’ll do my best to explain this
20:21:00 <geist> depends at what level is what i think we'll get at. the trick is finding the particular layer to abstract at
20:22:00 <geist> as you get lower and lower level you'll find differences, but if you abstract at where arches tend to align then it works reasonably well
20:22:00 <geist> since clearly it's done all the time
20:23:00 <kaichiuchi> if you’re on x86-64, and you want to query, say, SMP status or something, whatever outside of the arch layer
20:23:00 <kaichiuchi> do people literally just have arch_smp_query or something
20:23:00 <geist> sure
20:24:00 <geist> stuff like 'arch_get_cpu_id' or whatnot is fairly abstractable, since it's generally assumed that there's a mechanism to get some sort of current cpu id
20:24:00 <ddevault> take a look at my code if you like
20:24:00 <bslsk05> ​ ~sircmpwn/helios - sourcehut git
20:24:00 <ddevault> pretty small kernel with x86_64 and aarch64 and pretty good separation of concerns
20:24:00 <kaichiuchi> but what if you’re on a system that doesn’t have SMP?
20:24:00 <ddevault> arch-specific stuff is in directories or files named +aarch64 or +x86_64
20:24:00 <geist> kaichiuchi: then you can just say 'there's one cpu'
20:24:00 <ddevault> well, you write an abstraction
20:24:00 <ddevault> yeah like that
20:24:00 <kaichiuchi> i assume it just returns NULL and you say “this architecture doesn’t support SMP”, yes
20:25:00 <geist> sure
20:25:00 <geist> or SMP is active but there's only one core
20:25:00 <netbsduser> kaichiuchi: i have e.g. `CURCPU()` which is an inline function or macro (i forgot) that returns e.g. the current CPU
20:25:00 <kaichiuchi> the reason i’m being diligent is because i don’t want to get architecture abstraction totally ass backwards
20:25:00 <netbsduser> meanwhile there is e.g. a CPU count and an array of CPU objects
20:25:00 <geist> for a while it was really beneficial to have SMP and non SMP builds, but in general since most machines are SMP it's less of a concern now
20:25:00 <geist> vs say 10-15 years ago
20:25:00 <ddevault> I mean the basic approach is
20:25:00 <geist> but in that case a non smp build would be to just ifdef out parts and hard code things like 'there's one cpu'
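The arch_*/uniprocessor-fallback pattern geist and netbsduser are describing, as a minimal C sketch. Every name here (`arch_curr_cpu_num`, `WITH_SMP`, `curr_cpu`) is hypothetical, chosen only to illustrate the layering.

```c
// Sketch: generic code calls arch_curr_cpu_num(); each arch supplies it.
// A non-SMP build just hard-codes "there's one cpu", per the discussion.
#ifdef WITH_SMP
unsigned int arch_curr_cpu_num(void);  // e.g. read MPIDR_EL1 or the APIC ID
#else
static inline unsigned int arch_curr_cpu_num(void) { return 0; } // UP build
#endif

// Arch-agnostic per-cpu bookkeeping (one entry here for the UP case;
// an SMP build would size this by the discovered cpu count).
struct cpu { unsigned int id; };
static struct cpu cpus[1];

static inline struct cpu *curr_cpu(void)
{
    return &cpus[arch_curr_cpu_num()];
}
```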
20:25:00 <ddevault> start with one target platform, but be sensitive to what code you're writing is and is not arch-specific
20:26:00 <kaichiuchi> that’s what i’m thinking
20:26:00 <netbsduser> like many people do nowadays i have been experimenting with C++ which i find makes it harder to do this kind of abstraction if you follow the typical C++ approach
20:26:00 <ddevault> kprintf? scheduler? syscall implementation? not arch specific
20:26:00 <ddevault> syscall ABI? interrupt handlers? SMP? arch specific
20:26:00 <ddevault> and be careful not to mix the two when possible
20:26:00 <kaichiuchi> well, that’s another question
20:26:00 <geist> same with my stuff: i try to have very few #ifdef ARCH_FOO outside of this
20:26:00 <bslsk05> ​ lk/arch at master · littlekernel/lk · GitHub
20:26:00 <kaichiuchi> does kprintf try and detect if there’s a screen and print to that?
20:26:00 <geist> and try to abstract via arch specific headers, and arch specific implementations of things
20:27:00 <ddevault> no, it should work through an abstraction
20:27:00 <kaichiuchi> or does it do arch_printf()?
20:27:00 <ddevault> which might print to a screen, or to a serial port, or something else
20:27:00 <kaichiuchi> and how does that work with video drivers?
20:27:00 <geist> yeah or some sort of 'console_putc' or something
20:27:00 <ddevault> or struct console { write: *fn(buf: []u8) size }
20:27:00 <geist> that is then filled out at run time and/or statically with something that implements it for that machine
20:27:00 <geist> yah
20:27:00 <ddevault> video drivers would provide an implementation of that write function
20:27:00 <kaichiuchi> ahhhhhhhhh
20:28:00 <kaichiuchi> but then, for example
20:28:00 <geist> for LK for example there's a platform_putc() that gets filled in per platform (ie, this particular board, or a PC or whatnot) that knows what to do with it
20:28:00 <ddevault> (well, probably not exactly, but you get the idea)
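The console abstraction being sketched above, spelled out in C. The struct layout and names are illustrative (ddevault's snippet is in another language entirely); the point is just that kprintf goes through a registered function pointer instead of touching VGA memory or a UART directly.

```c
// Sketch: kprintf formats into a buffer, then calls console_write();
// whichever driver (serial, video, ...) registered itself does the I/O.
#include <stddef.h>

struct console {
    void (*putc)(char c);
};

static struct console *active_console;

void console_register(struct console *c) { active_console = c; }

void console_write(const char *buf, size_t len)
{
    if (!active_console)
        return;                     // early boot: nowhere to print yet
    for (size_t i = 0; i < len; i++)
        active_console->putc(buf[i]);
}

// A trivial "driver" for illustration: buffers characters in memory.
static char demo_buf[64];
static size_t demo_len;
static void demo_putc(char c) { demo_buf[demo_len++] = c; }
static struct console demo_console = { demo_putc };
```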
20:28:00 <kaichiuchi> in arch_printf(), would I then look for a “video driver” and talk to it in some abstract way as well?
20:29:00 <geist> well, personally i wouldn't call it 'arch' because arch really has nothing to do with video or whatnot
20:29:00 <kaichiuchi> are there direct “classes” of types of drivers?
20:29:00 <kaichiuchi> right
20:29:00 <ddevault> these questions are pretty naive
20:29:00 <ddevault> just start writing code and you'll get the idea
20:29:00 <kaichiuchi> yeah that’s kind of the point.
20:29:00 <netbsduser> kaichiuchi: in my previous kernel i had 'classes' of device, quite literally
20:29:00 <geist> i tend to be more hard line about it: arch is things like x86 or arm. i have another layer (platform) that deals with 'a PC or a raspberry pi or whatnot'
20:29:00 <ddevault> here's my console drivers fwiw
20:29:00 <bslsk05> ​ ~sircmpwn/helios: arch/dev/ - sourcehut git
20:29:00 <geist> personally linux is a bad example here, since they toss IMO far too much stuff under 'arch'
20:29:00 <geist> whereas the bsds tend to be a bit more abstract about it
20:29:00 <kaichiuchi> geist: i think that’s a reasonable abstraction
20:30:00 <netbsduser> i decided to implement my drivers in objective-c to gain suitable benefits from the object-oriented approach and structured my drivers in an object-oriented fashion
20:30:00 <ddevault> I don't like the "I'm a raspberry pi" code
20:30:00 <ddevault> I like the "I have a device tree" code
20:30:00 <geist> yes, it depends on how generic you like things
20:31:00 <kaichiuchi> i see
20:31:00 <ddevault> recommendation: ignore everything netbsduser just said
20:31:00 <geist> and what the intent of the design is. keep in mind generic arm kernels are far more complicated than an x86-pc in a certain sense
20:31:00 <kaichiuchi> right, which is the “problem”
20:31:00 <kaichiuchi> i don’t know exactly how far I need to go
20:31:00 <ddevault> if I may venture a guess
20:31:00 <kaichiuchi> but i guess you cross that bridge when you get to it
20:31:00 <netbsduser> ddevault: why?
20:31:00 <ddevault> your kernel is not mature enough to be thinking about these questions right now
20:31:00 <geist> so for now i'd not get too worried about it. just where something seems to be arch specific, put it behind at least one layer of abstraction (arch_foo, etc)
20:31:00 <ddevault> if you are not really sure how printk links to your video driver, for instance
20:32:00 <geist> and things that seem to be specific to that device (the screen, the uart, etc) put it behind another layer
20:32:00 <geist> and then build from there
20:32:00 <ddevault> you'll probably paint yourself into a corner and add unnecessary complexity by going multiarch at this stage
20:32:00 <geist> if you at least dont slam everything together you'll have like 80% of your abstraction
20:32:00 <kaichiuchi> it’s not wrong to ask these questions out of curiosity
20:32:00 <gog> abstract your abstractions
20:32:00 <geist> might not be perfect but it's in the right direction
20:32:00 <ddevault> sure, no problem
20:32:00 <gog> concrete your concretions
20:32:00 <kaichiuchi> but yes your point is well taken all the same
20:32:00 <ddevault> just letting you know that you may not be well-equipped to apply the answers right now
20:32:00 <geist> like even an abstraction like 'console_write' that kprintf goes through is a good starting point
20:33:00 <geist> ie, dont put code that directly fiddles with your VGA screen in the body of kprintf
20:33:00 <ddevault> that much is true :)
20:33:00 <kaichiuchi> geist: precisely what I want to avoid
20:33:00 <ddevault> like I said before, try to notice when you're writing arch-specific or portable code and organize them into separate places
20:33:00 <geist> and dont fiddle directly with ARM registers in your main(), etc
20:33:00 <geist> yep
20:33:00 <geist> that sort of thing is usually what i go for directly if someone says 'take a look at this os written for X purpose'
20:33:00 <ddevault> if you do that you'll have a good start for when you're ready to add a second arch
20:33:00 <geist> i like to see how mature the abstractions are
20:34:00 <kaichiuchi> I just know that people starting out write directly to 0xB8000 or whatever the hell the VGA address is
20:34:00 <kaichiuchi> which I guess is fine
20:34:00 <netbsduser> that's a rather retro way to go
20:34:00 <geist> right. and that's of course fine, but it generally shows a level of knowledge/experience of the author
20:34:00 <ddevault> the mark of a promising project is how quickly they stop writing directly to b8000
20:34:00 <kaichiuchi> but I don’t know how I can print through a video driver
20:34:00 <geist> since even simple abstractions are worth a lot, and not really expensive at all
20:34:00 <ddevault> you can't write a video driver at this point
20:35:00 <geist> yah
20:35:00 <ddevault> I've been working on my kernel for 9 months and I don't have a video driver
20:35:00 <ddevault> all serial all the time baby
20:35:00 <kaichiuchi> fair
20:35:00 <ddevault> you have many yaks to shave
20:35:00 <kof123> bad gog! abstract your concretions, concrete your abstractions...three times in the air. then you will surely have the magical osdev stone
20:35:00 <geist> yah which is also why an abstraction of at least 'put the characters through this interface' helps, because serial and a screen look the same to the caller
20:35:00 <Bitweasil> Banging on 3F8... or the MMIO version (probably a PL011?)...
20:36:00 <netbsduser> i target the limine bootloader for amd64, which helpfully provides you with a terminal so you can get to work immediately on the actually interesting things
20:36:00 <geist> as a side note this is also why vt100 still lives, since you can now abstract something that is a screen into something that looks like a serial port
20:36:00 <Bitweasil> I'd rather have serial than video for bringup anyway. You can log serial to a file and sort through verbose amounts of crap without your remote end having to buffer it.
20:36:00 <ddevault> I actually do still have a console writing to 0xb8000 so I can test on PCs without a serial port (i.e. almost all of them)
20:36:00 <ddevault> but yeah I usually test with qemu via 0x3F8 on x86_64
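The 0x3F8 path ddevault mentions, as a hedged sketch of classic polled 16550 output for early x86 debugging. Register offsets and bits are the standard 16550 ones; the function names are made up, and port I/O needs ring 0 (or ioperm), so only the baud-divisor arithmetic is exercisable from userspace.

```c
// Sketch: polled serial output on COM1 (0x3F8) for early kernel debugging.
#include <stdint.h>

#define COM1 0x3F8

// 16550 divisor: base clock 115200 Hz divided by the desired baud rate.
static inline uint16_t uart_divisor(uint32_t baud)
{
    return (uint16_t)(115200 / baud);
}

#if defined(__x86_64__)
static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ volatile("outb %0, %1" : : "a"(v), "Nd"(port));
}

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void serial_putc(char c)
{
    while (!(inb(COM1 + 5) & 0x20))   // LSR bit 5: transmit holding empty
        ;
    outb(COM1, (uint8_t)c);           // requires ring 0 / ioperm to run
}
#endif
```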
20:36:00 <geist> yah and PCs in particular have a bad tendency to triple fault and clear the screen
20:36:00 <geist> so you can't see the last thing written
20:36:00 <Bitweasil> Yup.
20:37:00 <Bitweasil> Been there, "Wait, you... *ugh.*"
20:37:00 <Bitweasil> "triple fault" vs "infinite interrupt loop" - I get why they did it, but it's kind of a nasty behavior.
20:37:00 <ddevault> manually patching your fault handler to write less stuff so that you can see what your kernel did on real hardware is a mood
20:37:00 <Bitweasil> Zero warning reboot with no way to evaluate state.
20:37:00 <Bitweasil> And no way I'm aware of to really trap it (on hardware).
20:38:00 <geist> kaichiuchi: anyway, you're asking the right questions, it's good stuff
20:38:00 * Bitweasil curls up with the ARMv7 PMSA section.
20:39:00 * gog pets Bitweasil
20:40:00 <Bitweasil> Man, these inversions are rough on the solar production. :/
20:41:00 <Bitweasil> And my backup battery bank inverter idles at 100W.
22:26:00 <heat> oh wow I sleep for a bit after almost 24h of no sleep and suddenly everyone decides to do osdev and talk about everything interesting
22:26:00 <heat> smh
22:26:00 <zid> Well it stands to reason
22:26:00 <zid> there was space left in the channel because nobody was talking about how awesome rust is
22:26:00 <heat> ruuuuus
22:27:00 <gog> rust rust rust
22:27:00 <heat> kaichiuchi, have you started
22:28:00 * sortie merges DNS support to his libc.
22:28:00 <gog> i don't know how to program
22:28:00 <sortie> Date: Tue Jul 26 22:47:51 2016 +0200
22:28:00 <sortie> This took a lil while
22:28:00 <gog> aaaay
22:28:00 <zid> I was just thinking of improving my network stack
22:29:00 <zid> then I remembered I forgot how to set up my bridge.. again
22:29:00 <zid> bridges are lame
22:29:00 <sortie> macvtap is the cool thing I use to put one of my VMs directly on the internet
22:30:00 <zid> Yea I'd use my bridge, if I could remember the command
22:30:00 <zid> I always forget which things need which addresses
22:30:00 <zid> whether I run dhcpcd on br0 or the devices, blah blah
22:30:00 <zid> I should have written it down
22:30:00 <kaichiuchi> heat: eventually
22:31:00 <heat> kaichiuchi, start now
22:31:00 <heat> just do it like the checkmark company
22:31:00 <heat> or the screaming actor
22:31:00 <heat> zid, you ran it the other day when you were trying to impress me
22:31:00 <heat> i told you you should get a script
22:34:00 <kaichiuchi> right now, I'm trying to figure out how I'm going to have someone home to pick up my packages tomorrow
22:34:00 <kaichiuchi> because from _two separate carriers_, both packages were delayed
22:35:00 <zid> heat: yes I couldn't remember then either
22:35:00 <zid> Also my testing is fucked if I actually enable VBE cus it just hogs all the output and can't be updated easily :P
22:36:00 <zid> I 'fixed' that by making it dump to serial port instead but a) I don't remember if it's in this tree and b) I don't remember how to log serial
22:41:00 <heat> qemu -serial stdio > log
22:42:00 <heat> there should be a "output-to-file" option but who tf cares
22:44:00 <zid> noo I need to inotify | watch | tee | cat | vim it
23:48:00 * sortie merges ifconfig(8).
23:52:00 <mjg> so are you faster than onyx?
23:57:00 <zid> my code is way faster than onyx
23:57:00 <zid> it's smaller so it downloads faster, and it's C so it compiles faster, and does nothing so the main loop is faster
23:58:00 <gog> sophia is faster than boros
23:58:00 <gog> it does even less
23:58:00 <zid> is that because the entire codebase is _start: jmp -2; hlt
23:58:00 <gog> excuse me it's more than that
23:58:00 <gog> i also can print hello world
23:58:00 <zid> nice
23:58:00 <gog> i wrote a very big printf
23:59:00 <gog> and only use a little of it
23:59:00 <moon-child> I can print hello world in multiple windows
23:59:00 <zid> I need to write proper dmesg code and have the screen automatically update and stuff but.. meh
23:59:00 <moon-child> dmesg goes to its own lil window
23:59:00 <zid> I'd need timers or something for that
23:59:00 <zid> and that sounds like a pain