Search logs:

channel logs for 2004–2010 are archived at http://tunes.org/~nef/logs/old/ (can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev2&y=22&m=5&d=11

Wednesday, 11 May 2022

01:50:00 * gamozo waves
01:50:00 <gamozo> What's everyone working on?
01:53:00 <klange> work
01:53:00 <Mutabah> It is currently Work [TM] O'Clock.
01:53:00 <Mutabah> But in non-work times, getting back into USB
01:55:00 <gamozo> That sounds fun! What parts of USB are you lookin to do?
01:55:00 <gamozo> I've never done NVMe before and I think that's on my mind
01:58:00 <Mutabah> Polishing MSC and HID support, might see about adding a UHCI driver too (currently only have OHCI)
01:58:00 <Mutabah> really should also try EHCI/xHCI
01:58:00 <Mutabah> (i.e. USB2/USB3)
02:06:00 <gamozo> I actually haven't done much USB beyond... looking through wireshark logs. Have the newer interfaces gotten nicer/better to use with software?
02:07:00 <Mutabah> iirc, yes.
02:07:00 <Mutabah> Although, the newer standards and backwards compat bring their own complexity
02:07:00 <Mutabah> and offloading adds complexity
02:09:00 <gamozo> Yeah, that's pretty fair. It's honestly pretty impressive how deep USB's backwards compat is at this point aha
02:09:00 <gamozo> I still have ancient USB 1.1 devices I'll just plug in and... use
02:10:00 <Mutabah> Most keyboards/mice are 1.1
02:10:00 <Mutabah> no need for anything faster
02:11:00 <gamozo> Huh, I wonder how much a USB 2.0 controller is compared to a USB 1.1 in terms of like, silicon/asic cost/complexity
02:12:00 <Mutabah> well, it has a substantially faster clock
02:12:00 <Mutabah> probably not an issue nowadays, but if you have a pre-existing device controller, why redesign it
02:13:00 <gamozo> Yeah, makes sense. I've always been really curious about the logistics of low-end chips
02:15:00 <gamozo> I always love when I see a new like, 4000-series logic chip show up on the market. Always kinda neat to see why someone starts up fab of such a basic chip.
02:30:00 <sikkiladho> can we cast a function pointer to void without getting the warning?
02:31:00 <Mutabah> Depends on the compiler, and depends on the warning
02:31:00 <gamozo> Yeah what's the warning?
02:38:00 <kingoffrance> IIRC data pointers and function pointers were not guaranteed to be the same size... but posix dlsym() inadvertently requires converting between them, so on posix at least it must be permitted. see RATIONALE https://pubs.opengroup.org/onlinepubs/009604499/functions/dlsym.html
02:38:00 <bslsk05> ​pubs.opengroup.org: dlsym
02:38:00 <kingoffrance> i'm assuming they meant casting to void *, not invoking the function and casting the return value to (void)
02:39:00 <kingoffrance> "Due to the problem noted here, a future version may either add a new function to return function pointers, or the current interface may be deprecated in favor of two new functions: one that returns data pointers and the other that returns function pointers."
02:39:00 <kingoffrance> did they ever do that? lol
02:40:00 <kingoffrance> manpages (bsd and linux) say it came from sunos
02:40:00 <kingoffrance> so "posix" perhaps just followed sun
02:43:00 <klange> No.
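The workaround kingoffrance is pointing at is spelled out in the dlsym RATIONALE linked above: rather than casting the void * value to a function pointer (which is what draws the warning), assign through the pointer's storage. A minimal sketch, with libfoo.so and my_function as invented placeholders:

```cpp
// Sketch of the POSIX dlsym RATIONALE idiom: write through the function
// pointer's storage instead of casting the value, so the compiler never
// sees an object-to-function-pointer conversion.
// "libfoo.so" and "my_function" are placeholders.
#include <dlfcn.h>
#include <cstdio>

int main() {
    void *handle = dlopen("libfoo.so", RTLD_LAZY);
    if (!handle) return 1;

    using fn_t = int (*)(int);
    fn_t fn;
    // The cast dance from the RATIONALE: *(void **)(&fptr) = dlsym(...)
    *reinterpret_cast<void **>(&fn) = dlsym(handle, "my_function");

    if (fn) std::printf("%d\n", fn(42));
    dlclose(handle);
}
```

(klange's "No." is the short version: the two-function split the RATIONALE floats was never added.)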
04:14:00 <kazinsal> ha. I'm reading the source for the FreeBSD BPF JIT and it's even simpler than I thought it would be
04:16:00 <kazinsal> it's just a two-pass compiler written with a bunch of macros that emit machine code
04:17:00 <moon-child> tcc: 'you guys are getting more than one pass??'
04:27:00 <klange> passes are for weenies, emit instructions directly as you parse
04:31:00 <moon-child> yep that's basically what tcc does
04:31:00 <moon-child> does have a separate pass for tokenising
04:33:00 <kazinsal> yeah the only reason freebsd does two passes is because BPF's instructions are fixed length so converting jump offsets to work with emitted x86 code needs a second pass to fix up the offsets
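In miniature, the two-pass shape kazinsal describes looks like this: pass 1 emits into a scratch buffer purely to learn where each fixed-length bytecode instruction lands in the variable-length native stream, and pass 2 re-emits with the finished offset table. The mini instruction set below is invented for illustration and is not BPF:

```cpp
// Toy two-pass JIT skeleton: fixed-length bytecode in, variable-length
// "machine code" out. Pass 1 measures, pass 2 emits with known offsets.
#include <cstdint>
#include <vector>

struct Insn { enum Op { ADD_IMM, JMP } op; int32_t arg; }; // arg: imm or target index

std::vector<uint8_t> jit(const std::vector<Insn> &prog) {
    std::vector<size_t> native_off(prog.size() + 1, 0);
    std::vector<uint8_t> out;

    for (int pass = 0; pass < 2; ++pass) {
        out.clear();
        for (size_t i = 0; i < prog.size(); ++i) {
            native_off[i] = out.size();            // where insn i landed
            switch (prog[i].op) {
            case Insn::ADD_IMM:                    // add eax, imm32 (5 bytes)
                out.push_back(0x05);
                for (int b = 0; b < 4; ++b)
                    out.push_back((prog[i].arg >> (8 * b)) & 0xff);
                break;
            case Insn::JMP: {                      // jmp rel32 (5 bytes)
                // Pass 1 may see a stale/zero target offset; only the
                // emitted size matters there. Pass 2 has the real table.
                int32_t rel = (int32_t)native_off[prog[i].arg]
                            - (int32_t)(out.size() + 5);
                out.push_back(0xE9);
                for (int b = 0; b < 4; ++b)
                    out.push_back((rel >> (8 * b)) & 0xff);
                break;
            }
            }
        }
        native_off[prog.size()] = out.size();      // allow jumps to the end
    }
    return out;  // a real JIT would mmap this PROT_READ|PROT_EXEC
}
```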
04:45:00 <heat> is it BPF or eBPF?
04:46:00 <kazinsal> BPF, eBPF is a linux thing
04:46:00 <heat> wouldn't be surprised to learn freebsd does it too :)
04:46:00 <heat> I think linux does extensive verification to make sure it's not getting pwned by a bpf/ebpf program
04:47:00 <moon-child> wiki sez windows has ebpf too
04:47:00 <kazinsal> yeah, that's one of the downsides to how thoroughly extended eBPF is
04:47:00 * kingoffrance .oO( TIL 0x8900 outputting 'S' 'h' 'u' 't' 'd' 'o' 'w' 'n' will shut down bochs? )
04:47:00 <kazinsal> and yeah, there is a user-mode implementation of eBPF for Windows
04:47:00 <kazinsal> think it has a kernel-mode driver component as well
04:48:00 <kazinsal> a few years ago at BSDCan someone did a talk/paper on eBPF in FreeBSD but I don't think there's been much progress since then, for lack of interest
04:49:00 <heat> surprising
04:49:00 <heat> like half of linux networking is just eBPF strapped to the kernel :P
04:49:00 <kazinsal> kind of makes me wonder how much of IOS-XE is implemented in hacky eBPF
04:50:00 <kazinsal> since it's basically Cisco IOS as a series of daemons on Linux
04:50:00 <kazinsal> but it's also got integrated docker and kvm-on-IOS support and stuff
04:50:00 <kazinsal> so there's probably a good bit of eBPF hooking involved
04:50:00 <heat> bpf + AF_PACKET?
05:26:00 <geist> oh hey so anyone here like PCI?
05:26:00 <geist> like anyone here think they understand it?
05:26:00 <geist> found a fun thing at work with a Dell laptop. first time i've ever seen it (on x86)
05:27:00 <kazinsal> as in the bus or as in compliance standards? :P
05:28:00 <geist> as in think you've seen it all but are surprised
05:28:00 <kazinsal> lay it on me
05:28:00 <geist> seen a laptop with a second pci *segment*
05:28:00 <geist> ie, 0000:00.0 + as you expect
05:28:00 <geist> and then a *second* segment 1000:00.0
05:28:00 <kazinsal> oh whoa
05:29:00 <geist> (actually in this case it's curiously 1000:e1.0)
05:29:00 <geist> nothing funny like thunderbolt (which would actually make sense)
05:29:00 <geist> just another tiger lake root port + a single nvme device
05:29:00 <kazinsal> well now that makes config space a bit more annoying
05:29:00 <geist> right?
05:29:00 <geist> it's valid, i just honestly didn't think intel hardware had it in it
05:29:00 <kazinsal> thank god for having a hojillion bytes of virtual address space
05:29:00 <geist> i think all the segmentation stuff is actually described in ACPI, etc
05:30:00 <geist> i didn't check but presumably it's a separate ECAM, though why it doesn't start over at bus 0 i dunno
05:30:00 <kazinsal> yeah, you'd need to allocate another 256 megs of ECAM space
05:30:00 <kazinsal> since I think you only get one segment group per ECAM
05:30:00 <geist> i think so too
05:31:00 <kazinsal> admittedly I haven't implemented ECAM
05:31:00 <geist> possible they do something cheesy like say the ECAMs are on top of each other, so that they can get away with a single ECAM
05:31:00 <geist> and thus it's really a separate pci root port that somehow is considered another segment
05:31:00 <kazinsal> yeah, I guess if your BDF numbers can fit in there nicely
05:31:00 <kazinsal> but dang
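For reference, the geometry behind kazinsal's 256 MiB figure: ECAM gives every function 4 KiB of config space, and 4 KiB x 8 functions x 32 devices x 256 buses per segment group works out to exactly 256 MiB, with ACPI's MCFG table supplying one base address per segment. A sketch of the address math, where the base addresses are invented placeholders for the MCFG lookup:

```cpp
// ECAM address math: the B/D/F is packed straight into the physical
// address as bus[27:20] | device[19:15] | function[14:12] | reg[11:0],
// so one segment group spans exactly 256 MiB.
#include <cstdint>

// Stand-in for the ACPI MCFG lookup; these bases are placeholders.
static uint64_t ecam_base_for_segment(uint16_t segment) {
    switch (segment) {
    case 0x0000: return 0xE0000000;  // placeholder
    case 0x1000: return 0xD0000000;  // placeholder for the Dell's 2nd segment
    default:     return 0;
    }
}

uint64_t ecam_addr(uint16_t segment, uint8_t bus, uint8_t dev,
                   uint8_t fn, uint16_t reg) {
    return ecam_base_for_segment(segment)
         | ((uint64_t)bus << 20)
         | ((uint64_t)(dev & 0x1f) << 15)
         | ((uint64_t)(fn  & 0x7)  << 12)
         | (reg & 0xfff);
}
```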
05:31:00 <geist> i dont personally have access to the laptop (Dell Latitude 5420)
05:31:00 <kazinsal> that's neat
05:31:00 <geist> but it's just bog standard Tiger Lake
05:31:00 <kazinsal> ...hold on, let me check what my work laptop is
05:31:00 <geist> so had no idea you could configure it that way
05:32:00 <kazinsal> ah, 5320
05:32:00 <kazinsal> still kinda curious though. one moment
05:32:00 <geist> needless to say it messes up fuchsia's pci implementation, hence why i was looking at it
05:32:00 <kazinsal> argh, why did my laptop reboot
05:32:00 <kazinsal> I may or may not have not saved things. damn sysadmin pushing stuff down over intune
05:32:00 <Jari--> I reboot on aptitude upgrade
05:33:00 <geist> i usually do only if some libs are floating around. i find em with 'sudo lsof | grep DEL'
05:33:00 <Jari--> I should redesign my custom 32-bit FAT file system to work on a 64-bit FAT table.
05:33:00 <Jari--> That shouldn't be hard to rewrite.
05:34:00 <geist> that starts to get pretty excessive though right? since to really need it you'd need more than 4bil entries, which is by definition already 4GB of disk space
05:34:00 <geist> well, actually 16GB
05:34:00 <kazinsal> grr, can't see PCIe segment group ID from wmic
05:34:00 <kazinsal> but I do also have an interesting jump from bus 0 to bus 113
05:34:00 <geist> i created a 2TB FAT32 test image and by then it's already got like a 256MB FAT i believe, pretty slow to scan
05:35:00 <Jari--> geist: if I want to use modern disk drives
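The arithmetic behind geist's figures, written out as compile-time checks (the 32 KiB cluster size is an assumption, not something stated above):

```cpp
// Back-of-envelope FAT sizing: one FAT entry per cluster, 4 bytes per
// entry in FAT32, 8 bytes for a hypothetical 64-bit FAT.
#include <cstdint>

constexpr uint64_t KiB = 1024, MiB = 1024 * KiB, GiB = 1024 * MiB, TiB = 1024 * GiB;

// 2 TiB volume with (assumed) 32 KiB clusters -> 64Mi clusters -> 256 MiB FAT
static_assert((2 * TiB / (32 * KiB)) * 4 == 256 * MiB, "geist's 256MB FAT");

// Crossing 2^32 clusters, the reason to go 64-bit at all: the table
// alone is 16 GiB at 4 bytes/entry, 32 GiB at a true 8 bytes/entry.
static_assert((1ull << 32) * 4 == 16 * GiB, "geist's 16GB figure");
static_assert((1ull << 32) * 8 == 32 * GiB, "with 8-byte entries");
```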
05:35:00 <geist> kazinsal: yah that's the other thing this dell does because of thunderbolt
05:35:00 <kazinsal> 113:0:0 is something realtek, hrm
05:35:00 <geist> all on segment [0000].... bus 0, bridge to bus 1-31, second bridge to bus 32-71 or something
05:35:00 <geist> basically the two bridges are thunderbolt controllers so they reserve like 30 something busses up front
05:35:00 <kazinsal> expresscard reader? I didn't know this thing had one of those
05:35:00 <geist> and then there's a device at like bus 71
05:36:00 <kazinsal> wait, no, it doesn't have an expresscard port. but it has an expresscard reader chip. nice market segmentation dell
05:36:00 <geist> actually that's the same thing kazinsal. hex 71 is 113 decimal
05:36:00 <geist> so it's likely this machine is set up the same way. does it have nvme? if so what bus number is it on?
05:37:00 <kazinsal> I think it does, one sec
05:37:00 <geist> that's whats on the separate segment on this one
05:37:00 <kazinsal> NVMe is saying it's on 3:0:0
05:37:00 <geist> ah
05:37:00 <kazinsal> Windows is reporting it as "NVMe BC711 NVMe SK hynix 256GB"
05:38:00 <geist> yah pretty standard
05:39:00 <kazinsal> let me see if I can actually run a proper lspci type util on this thing without the corporate antivirus ratting on me
05:39:00 <geist> BUSTED
05:41:00 <kazinsal> the windows lspci port isn't showing me the segment ID unfortunately
05:41:00 <kazinsal> but some interesting stuff here
05:41:00 <geist> yeah?
05:41:00 <kazinsal> a bunch of low level bridge information for that expresscard bridge
05:41:00 <geist> possible the segmentation numbering is linux side, but i dont see how they can interpret it any other way
05:42:00 <kazinsal> pretty much every device on this thing has a subsystem vendor of Dell (obviously) and the same subsystem device ID of 0A1F
05:43:00 <kazinsal> interestingly this lspci port doesn't seem to understand 64-bit BARs well so kind of SOL on the info of those
05:43:00 <kazinsal> I suspect Windows remaps them >4G
05:43:00 <geist> yeah
05:44:00 <kazinsal> also interesting is that there's an Intel Corporation Device A0EF at 00:14.2 that claims to be of the class "RAM memory"
05:45:00 <kazinsal> "Tiger Lake-LP Shared SRAM"
05:45:00 <geist> oh that's fun!
05:45:00 <geist> how big is the bar for that?
05:46:00 <kazinsal> doesn't say, there's two 64-bit non-prefetchable BARs that this port of pciutils doesn't grok
05:47:00 <geist> was going to say look at the bridge, but since it's on bus 0 it is special
05:48:00 <kazinsal> found a newer binary, let me see if this one will tell me
05:49:00 <kazinsal> okay, this thing's saying there's no BARs. let's ask windows directly instead
05:50:00 <kazinsal> CPU in question is an i5-1145G7 btw
05:50:00 <geist> i think that's the same as the 5420
05:51:00 <kazinsal> alright so it looks like the device may be unconfigured in Windows, which is saying it's a "PCI standard RAM controller".
05:51:00 <kazinsal> I don't know what the standard for that class is but there's no driver loaded for it so... who knows.
05:55:00 <kazinsal> looks like each root port has 544 MiB of address space assigned to it
05:56:00 <kazinsal> contiguously
05:56:00 <kazinsal> ah, plus a bit extra underneath the root complex's address space
05:56:00 <kazinsal> which is oddly also where the RAID "chip" and the card reader live. oh no, this topology just got a lot more complex
05:57:00 <geist> interesting yeah
05:57:00 <geist> i also remember seeing on this one something funny like a raid 'nvme' device on bus 00
05:57:00 <geist> like it's a faked out raid thing that the cpu implements
05:57:00 <kazinsal> yeah, probably 8086:9A0B
05:58:00 <kazinsal> and/or 8086:09AB, there's both on here
05:58:00 <kazinsal> https://usercontent.irccloud-cdn.com/file/yjPAj7yK/image.png
06:00:00 <kazinsal> interesting. bluetooth is hanging off XHCI
06:00:00 <kazinsal> wifi is not
07:19:00 <mrvn> kazinsal: 512MB + 32MB contiguously?
07:19:00 <kazinsal> Yeah
11:29:00 <heat> geist, fuck yeah pci segments
11:29:00 <heat> this proves I was right when I implemented full pci-e segment support
11:29:00 <heat> haterz said it couldn't be done
11:36:00 <mrvn> hah, he was just setting a challenge
17:21:00 <gorgonical> I was re-reading about OSes written in/on languages that use VMs, like Erlang. HydrOS is a concept that uses a microkernel in C and adds some built-ins to allow Erlang code to interface with the hardware better.
17:22:00 <gorgonical> But all of that assumes you have a working BEAM implementation on your platform, incl. a C runtime and library. How much of that is "cheating"?
17:25:00 <mrvn> You should look at mirage
17:25:00 <mrvn> https://github.com/mirage/mirage
17:26:00 <bslsk05> ​mirage/mirage - MirageOS is a library operating system that constructs unikernels (213 forks/1868 stargazers/ISC)
17:27:00 <GeDaMo> I believe Squeak Smalltalk is written in itself but it can generate the C for the VM
17:28:00 <gorgonical> mrvn: These unikernels target the hypervisor? Abstracts away some/all of the difficulty of hardware management?
17:29:00 <mrvn> gorgonical: yes, they just support the xen virtual hardware.
17:30:00 <gorgonical> An interesting concept. I have a vague understanding that OCaml can have a more direct interface with the hardware. Isn't there a way to compile ocaml to native code?
17:30:00 <mrvn> there is and that is what they do.
17:30:00 <gorgonical> Fascinating
17:30:00 <gorgonical> OCaml is such an interesting language
17:31:00 <mrvn> You always need some glue in asm or C to connect higher level languages to the hardware. That much cheating is unavoidable.
17:31:00 <gorgonical> I agree. I don't think it's cheating to use C to e.g. create your ASM stubs and do very low-level things like loading the GDT/IDT. But an entire C runtime to host a VM is not exactly what I had in mind
17:32:00 <gorgonical> Mind you, the core logic of the OS kernel *is* written in Erlang. The process management, paging, etc. is all written in Erlang. So that's where the argument can be made, I think
17:33:00 <mrvn> For ocaml there is a module ctypes that handles the glue for you. Can't remember if mirage uses that though.
17:35:00 <mrvn> I think XEN helps you with the GDT/IDT too, as in you don't have to deal with that at all. You provide a separate entry point to XEN that it calls for exceptions and interrupts and such.
17:35:00 <gorgonical> Wow
17:36:00 <gorgonical> I haven't dabbled much in paravirtualized stuff like Xen
17:36:00 <gorgonical> I mean, makes sense
17:36:00 <mrvn> The xen paravirtual interface really takes away tons and tons of the hardware and replaces it with virtual interfaces.
17:36:00 <gorgonical> That's sort of the whole selling point of these virtual machines isn't it? I know at least a few projects that say explicitly it only works on QEMU under a specific profile
17:37:00 <mrvn> And by not having to emulate all the crappy hardware interfaces it's so much faster.
17:38:00 <mrvn> Well, paravirtualized stuff is from a time before hardware VM support. It's kind of gotten lost in time now. The hardware has eliminated a lot of the inefficiencies that paravirtualized worked around.
17:38:00 <mrvn> qemu uses kvm
17:39:00 <j`ey> or hvf on macOS
17:53:00 <heat> you can also take away the hardware in qemu
17:53:00 <heat> use virtio everywhere
17:53:00 <heat> kvm extensions (or whatever those are called) also exist
17:54:00 <heat> kvm-clock and others
18:35:00 <geist> mrvn: or somewhat as i've mentioned before, riscv takes a different strategy and basically assumes paravirtualization is there always and thus the SBI firmware interface can be paravirtualized
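The SBI interface geist is referring to is a plain trap ABI: extension ID in a7, function ID in a6, arguments in a0-a5, then ecall into the firmware, which returns an error code in a0 and a value in a1. A minimal sketch assuming RV64 and GCC/Clang extended asm, using the TIME extension's sbi_set_timer as the example:

```cpp
// Minimal RISC-V SBI call sketch (RV64, GCC/Clang inline asm).
// Convention: a7 = extension ID, a6 = function ID, a0-a5 = args;
// the firmware returns error in a0, value in a1.
#include <cstdint>

struct SbiRet { long error; long value; };

static inline SbiRet sbi_call(unsigned long eid, unsigned long fid,
                              unsigned long arg0, unsigned long arg1) {
    register unsigned long a0 asm("a0") = arg0;
    register unsigned long a1 asm("a1") = arg1;
    register unsigned long a6 asm("a6") = fid;
    register unsigned long a7 asm("a7") = eid;
    asm volatile("ecall"
                 : "+r"(a0), "+r"(a1)
                 : "r"(a6), "r"(a7)
                 : "memory");
    return { (long)a0, (long)a1 };
}

// TIME extension: EID 0x54494D45 ("TIME"), FID 0 = sbi_set_timer.
static inline void sbi_set_timer(uint64_t stime_value) {
    sbi_call(0x54494D45, 0, stime_value, 0);
}
```

Whether the handler behind that ecall is firmware on bare metal or a hypervisor is invisible to the kernel, which is geist's point about the interface being paravirtualizable by construction.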
18:55:00 <mrvn> xen paravirtualization abstracts the page tables and such too.
18:56:00 <mrvn> Why is there no BYTE_BIT in limits.h?
18:56:00 <geist> yah. reminds me i should fiddle with that again
18:56:00 <geist> hmm, the compiler might provide that?
18:56:00 <geist> is it a builtin #define?
18:56:00 <mrvn> enum class byte : unsigned char {}; since c++17
18:57:00 <mrvn> it's what you should use for buffers
18:57:00 <geist> right
18:57:00 <mrvn> "A byte is only a collection of bits, and the only operators defined for it are the bitwise ones. "
18:57:00 <geist> i think the advantage there is it doesn't promote or convert itself to int, unlike unsigned char
18:58:00 <mrvn> yep. no arithmetic by accident
19:04:00 <geist> but at least all of us that have dealt with arm are pretty well aware of unsigned char vs char
19:04:00 <geist> unless you've had to deal with one of the few arches that defines it the other way you probably never would really bump into that weird edge case of C
19:07:00 <mrvn> ppc too
19:09:00 <mrvn> std::byte also has an effect on aliasing. A char * can alias anything in a function call while std::byte* can only alias other std::byte*.
19:40:00 <heat> I don't get std::byte
19:40:00 <heat> it's truly useless
19:45:00 <geist> vs what?
19:46:00 <heat> unsigned char?
19:46:00 <geist> go back and re-read a bit of what mrvn wrote
19:46:00 <heat> oops I can't read
19:46:00 <heat> yeah good point
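A compact illustration of mrvn's point (with a nod to his earlier BYTE_BIT question: limits.h only offers CHAR_BIT):

```cpp
// std::byte allows bitwise ops but no arithmetic and no implicit
// conversion to int, so raw-buffer bytes can't promote by accident.
#include <cstddef>

void demo(std::byte *buf, unsigned char *raw) {
    buf[0] = buf[0] | std::byte{0x80};    // ok: bitwise ops are defined
    buf[1] = buf[1] << 1;                 // ok: shifts too
    // buf[0] = buf[0] + std::byte{1};    // error: no operator+
    // int n = buf[0];                    // error: no implicit conversion
    int n = std::to_integer<int>(buf[0]); // conversions must be explicit

    raw[0] = raw[0] + 1;  // compiles fine: unsigned char promotes to int
    (void)n;
}
```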
19:46:00 <heat> anyway
19:46:00 <heat> EMERGENCY
19:46:00 <heat> NVIDIA IS PUBLISHING THEIR LINUX DRIVERS AS OPEN SOURCE
19:46:00 <heat> MIT/GPLv2
19:47:00 <heat> THIS IS NOT A DRILL
19:47:00 <geist> 🤯
19:47:00 <heat> https://www.nvidia.com/download/driverResults.aspx/187834/en
19:47:00 <bslsk05> ​www.nvidia.com: Linux x64 (AMD64/EM64T) Display Driver | 515.43.04 | Linux 64-bit | NVIDIA
19:47:00 <FireFly> wha
19:47:00 <heat> "Published the source code to a variant of the NVIDIA Linux kernel modules dual-licensed as MIT/GPLv2. The source is available here: https://github.com/NVIDIA/open-gpu-kernel-modules"
19:47:00 <heat> it's not available yet
19:47:00 <geist> oh that's the kernel module
19:47:00 <geist> that's probably because people have been bitching at them for years about it. the meat of it is still in user space
19:47:00 <geist> the kernel module i think mostly just implements the kernel side of NVRM. i've seen the internals of that. oh the horror
19:48:00 <heat> reclocking?
19:48:00 <geist> i mean it'll be helpful but i suspect 90% of the code base is still user space
19:48:00 <geist> also 'to a variant' is interesting too. is that a nerfed version?
19:48:00 <geist> also haha that github link doesn't resolve
19:49:00 <heat> nvidia is april fooling us in may
19:57:00 <Bitweasil> Hm? Drop the end quote on it.
19:57:00 <Bitweasil> https://github.com/NVIDIA/open-gpu-kernel-modules
19:57:00 <Bitweasil> Loads for me.
19:58:00 <Bitweasil> "Currently, the kernel modules can be built for x86_64 or aarch64." Huh, cool!
19:58:00 <Bitweasil> I guess the Jetson stuff is aarch64 with nVidia GPUs.
19:59:00 <heat> it wasn't loading, even without the quote
20:00:00 <clever> heat: https://github.com/NVIDIA/open-gpu-kernel-modules loads for me
20:00:00 <clever> 1 commit, 2 days old
20:00:00 <heat> yes, it's loading now
20:00:00 <Bitweasil> Ok.
20:01:00 <heat> it's written in C++
20:02:00 <heat> linus will lose his shit
20:03:00 <clever> lol
20:04:00 <heat> ok it's part C part C++
20:04:00 <heat> allegedly if you implement nvidia-drm and nvidia-uvm you can get that working on any kernel
20:12:00 <geist> actually looks like the github literally went live in the last hour
21:37:00 <klys> https://github.com/NVIDIA/open-gpu-kernel-modules/blob/main/src/nvidia/src/kernel/platform/chipset/chipset_info.c#L268
21:37:00 <bslsk05> ​github.com: open-gpu-kernel-modules/chipset_info.c at main · NVIDIA/open-gpu-kernel-modules · GitHub
21:38:00 <klys> still looking for my card, geforce gtx 960
21:38:00 <klys> found geforce somewhere anyways
21:41:00 <heat_> they do workarounds based on the chipset?
21:41:00 <heat> i wish this was closed source now
21:41:00 <klange> all the fun stuff is in https://github.com/NVIDIA/open-gpu-kernel-modules/tree/main/src/nvidia/src/kernel/gpu
21:41:00 <bslsk05> ​github.com: open-gpu-kernel-modules/src/nvidia/src/kernel/gpu at main · NVIDIA/open-gpu-kernel-modules · GitHub
21:42:00 <heat> the non-standardness of this code is really horrifying
21:42:00 <heat> NV_FALSE
21:43:00 <heat> i get that they couldn't use any kernel headers to stay OS-independent, but it still hurts muh eyes
21:43:00 <klys> https://github.com/NVIDIA/open-gpu-kernel-modules/blob/main/src/nvidia/src/kernel/gpu/bus/arch/maxwell/kern_bus_gm200.c
21:43:00 <bslsk05> ​github.com: open-gpu-kernel-modules/kern_bus_gm200.c at main · NVIDIA/open-gpu-kernel-modules · GitHub
21:43:00 <klys> ...getting closer
21:44:00 <klange> There isn't going to be anything gtx960-related in here that isn't tangential
21:45:00 <klys> gtx9xx are maxwell
21:45:00 <klange> The important stuff is only turing or later - that's RTX
21:45:00 <klys> so will I have a working driver from this code? you seem like you may know already
21:46:00 <heat> i think so
21:47:00 <heat> it's not here just for show
21:48:00 <klys> ok 960 is gm206
21:51:00 <klys> the latest trouble I encountered when asking around at freedesktop/nouveau is that they can't control the fans
21:52:00 <klange> > In this open-source release, support for GeForce and Workstation GPUs is alpha quality. GeForce and Workstation users can use this driver on Turing and NVIDIA Ampere architecture GPUs to run Linux desktops
21:52:00 <klange> You are not going to get a working driver for a Maxwell device from this source release.
21:53:00 <klys> I guess
21:53:00 <klange> It's not even aimed at desktop users. It's for datacenter GPUs first and foremost.
21:53:00 <klys> thanks klange
21:54:00 <klange> Hopefully there's enough leftover stuff for those older GPUs that nouveau can work off of to fill in gaps.
22:10:00 <Griwes> You won't get maxwell because maxwell lacks the piece of hardware that's needed for this to be viable
22:10:00 <Griwes> Also really glad I can finally talk about this :P
22:10:00 <klys> uvm? icpu?
22:11:00 <klys> now what feature do you mean to say
22:11:00 <klange> they moved all the proprietary bits from software into a riscv core on the GPU, so of course they can release sources now, none of the really interesting parts are in there
22:12:00 <Griwes> Yes, but it also means you need to do so much less for a driver to work
22:12:00 <klys> this kind of thing isn't a `driver' all the way through. it's performing gl instructions on cores.
22:13:00 <klys> so, it's an operating system, in a way
22:14:00 <klys> the only thing it really lacks is a timer driven interrupt controller
22:14:00 <klys> otherwise you could target the device
22:20:00 <klys> graphitemaster's take ought to be insightful; I'm hoping to hear from him
23:04:00 <graphitemaster> it's the kernel mode driver, the gl driver is all userspace driver software which is still closed
23:04:00 <graphitemaster> and yes, most of the special sauce has been moved into firmware which runs on risc-v cpus on the gpu itself and they just provide the firmware blobs now
23:05:00 <graphitemaster> but the good news is that nouveau can now use the firmware and kmd source code to implement reclocking finally
23:05:00 <graphitemaster> aside, nv also has an open source vulkan driver in the works (user space) so that may be released soonish
23:56:00 <geist> heat: yah i've had to deal with this nvrm stuff before
23:56:00 <geist> it's basically a full OS and hardware abstraction layer
23:57:00 <geist> i wonder what the firmware looks like on the riscv cores