Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ (these can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev&y=19&m=5&d=23

Thursday, 23 May 2019

12:19:56 <blackandblue> Hi folks, just curious, what OS do you guys run on PCs/laptops?
12:21:21 <blackandblue> or in other words, what's your setup like for osdev?
12:21:32 <pterp> OSX Mojave
12:21:42 <Mutabah> Ubuntu
12:21:58 <eryjus> fedora
12:22:11 <blackandblue> Mutabah and eryjus: in VM?
12:22:48 <eryjus> i was but realized my vpn connection was too slow for my liking.
12:22:59 <eryjus> so now a bare metal install
12:23:15 <blackandblue> I see. dual booting now?
12:23:44 <eryjus> nope. i have a virtualbox Windows install for the 2 applications i use occasionally
12:24:10 <blackandblue> may I know your specs
12:24:15 <blackandblue> windows in vm gets so slow for me
12:25:14 <Mutabah> Native here
12:25:15 <blackandblue> pterp: I dont have apple mac so I guess macOS is out of option
12:25:31 <blackandblue> Mutabah: cool. so windows in vm?
12:25:31 * Mutabah is away (Work)
12:25:33 <blackandblue> or dual boot
12:25:37 <eryjus> hmmm... hp zbook with 8GB ram.. nothing special... but like i said, I only use 2 windows apps and only occasionally
12:26:34 <blackandblue> eryjus: is it true that fedora breaks often during upgrades (6m cycle)
12:28:41 <blackandblue> I tried kernel development on a vm. compiling is so slow :(
12:28:44 <eryjus> the only challenge I ever had was from 14-15 (it's 30 now).. and a distro-sync fixed my problems
12:29:09 <blackandblue> eryjus: did you try other distros like ubuntu ?
12:29:14 <blackandblue> why fedora just curious
12:29:40 <eryjus> however, it is an aggressive upgrade cycle.. if you cannot stomach that, then I recommend CentOS which is the same code base with a far more "lazy" upgrade schedule
12:30:13 <blackandblue> yea I heard bad stories of fedora regarding stability
12:30:27 <eryjus> blackandblue, I prefer rpm-based distros.. I find them easier to work with -- and I will have others that will challenge me on that
12:30:40 <eryjus> but it's largely what I am most familiar with
12:31:11 <blackandblue> how so? I find ubuntu (.deb) and arch's AUR packages to be available almost everywhere
12:31:13 <eryjus> and as for why... i like the aggressive upgrade schedule
12:32:03 <eryjus> it's simple really -- it's what I started with and I have never changed.
12:33:12 <eryjus> chalk it up to personal preference and an old man wanting to invest his time in his own OS than learning a new host system
12:34:59 <eryjus> blackandblue, going back to an earlier comment, my vm when I was using it was 6 CPUs and 12GB ram; but I have a good sized vmware esxi server
12:35:18 <blackandblue> I see
12:38:16 <blackandblue> a bit hesitant to switch to linux full time due to lack of applications support
12:38:27 <blackandblue> vm is getting sluggish :(
12:38:30 <blackandblue> not sure what to do
12:38:46 <eryjus> dual boot until you are comfortable
12:39:28 * eryjus assumes there is enough free disk space to resize and install linux
12:40:09 <blackandblue> I am comfortable with arch linux but it's just not that stable. e.g. qemu instances crash etc.
12:40:20 <blackandblue> so I would go for ubuntu as an alternate
12:42:26 <eryjus> for the record, the only 2 applications I want for are my vmware console (which I can replace with HTML5 when I am able to upgrade my hardware and therefore upgrade my vmware software -- and I am considering Xen as a replacement) and my drobo (SAN) management software
12:44:25 <blackandblue> vmware console in a virtualbox vm?
12:44:27 <blackandblue> dont get it
12:45:45 <eryjus> i have a dedicated vmware server. I need a windows-based console to administer it (create VMs, etc). So, I run this windows console in virtualbox running in fedora.
01:16:33 <doug16k> I run qemu vms for weeks. linux doesn't crash all the time, you probably have a hardware problem
01:17:13 <doug16k> or overclocked
01:22:22 <doug16k> it boggles my mind that developers would ever overclock, in a million years. every little bug would have me wondering if it is the overclock
01:22:29 <zid> it isn't
01:22:32 <zid> don't worry
01:23:21 <zid> You don't go from 100% stable to 99.0% stable, you go from 100% to '10% below where I can make it 99.9% stable'
01:23:27 <zid> which is also 100% stable
01:23:50 <zid> I physically can't make my cpu fault, can't get enough cooling onto it to try
01:27:03 <adu> good ol' 80-20
01:27:34 <zid> If you're worried about an overclock on a top-binned part, why aren't you worried about default clocks on a potentially terrible part? :P
01:28:06 <zid> like, my bin is *so* good I physically can't crash the chip. Maybe your default clocks are more marginal than mine because your bin is bad
01:30:02 <zid> If you test where your chip fails it's easier to know where you are within your known good range at the clocks you run it at
01:31:08 <zid> I know for example that one of my ddr kits is just barely unstable at 850MHz, so anything significantly below that is 1-(1e-100) levels of stability
01:34:20 <zid> Or maybe it's unstable at 804MHz and they never should have rated it 800MHz to begin with, their factory was just cold and you're risking it on a hot day.
01:34:26 <zid> Unless you test you can't know
01:52:55 <doug16k> overclocking is fine. overclocking and having things crash and saying those things are unstable is not right
01:55:26 <doug16k> another way to look at it is, if you want knowledgeable people to listen to your problem reports, it is your responsibility to make sure it still happens on defaults. I never overclock anymore because I learned that no matter how thoroughly you test it, there will eventually be some subtle loop on a particular sequence of data that something does that makes it malfunction.
01:56:26 <doug16k> i've seen a case where I could test it endlessly with no problem, and unzip tons of different zip files, perfect. one particular zip file, always crc error. on defaults it was fine. never again after that
01:56:57 <doug16k> if it worked faster it would be sold as a faster one and get more money
03:43:32 <remexre> is there an equivalent of fxsave that works on ymm registers? or does it and I'm just bad at reading docs :P
03:55:45 <adu> according to studfiles, if the cpuid supports both YMM and XSAVE, then it should support what you're talking about
04:09:14 <adu> remexre: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-intrinsics-for-saving-and-restoring-the-extended-processor-states
04:10:33 <remexre> adu: oh, cool; thanks!
04:27:38 <doug16k> remexre, in decreasing order of desirability: xsaves/xsaveopt/xsavec/xsave/fxsave
04:28:13 <doug16k> xsaves won't work in a vm though, only on newish bare hardware, so skip to xsaveopt if you are focusing on vm at first
04:29:58 <doug16k> xsaves has every trick in the book to speed it up. xsaveopt has all of them except compact format, the rest are missing either the init or modified optimization, bare xsave has none of the optimizations
04:30:46 <doug16k> note that there is a 64 version of each of those. example: xsaves64
04:31:09 <doug16k> it changes the format of fpu operand and instruction pointers
04:36:10 <doug16k> the different versions of the instruction have some mix of compact format, init optimization (very efficiently representing zero registers without doing full load for each one), modified optimization (not even updating the save area for the save if that register didnt change since context load)
04:36:54 <doug16k> it tries so hard it minimizes the number of store ops and minimizes number of cache lines touched
04:37:52 <doug16k> number of store and load ops*
04:43:50 <remexre> doug16k: okay, thanks!
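
For reference, the variants doug16k ranks can be probed from CPUID: leaf 01h ECX bit 26 reports XSAVE itself, and leaf 0Dh sub-leaf 1 EAX reports XSAVEOPT (bit 0), XSAVEC (bit 1), and XSAVES (bit 3). A minimal C sketch of that selection, assuming a GCC/Clang build for the <cpuid.h> helpers; the enum names are made up, and enabling CR4.OSXSAVE and XCR0 beforehand is still your job:

    #include <cpuid.h>   /* GCC/Clang __cpuid helpers */

    enum fpu_save { SAVE_FXSAVE, SAVE_XSAVE, SAVE_XSAVEC, SAVE_XSAVEOPT, SAVE_XSAVES };

    static enum fpu_save pick_fpu_save(void)
    {
        unsigned a, b, c, d;

        __cpuid(1, a, b, c, d);
        if (!(c & (1u << 26)))              /* CPUID.01H:ECX.XSAVE */
            return SAVE_FXSAVE;

        __cpuid_count(0x0D, 1, a, b, c, d); /* extended-state sub-leaf */
        if (a & (1u << 3)) return SAVE_XSAVES;   /* also needs IA32_XSS set up */
        if (a & (1u << 0)) return SAVE_XSAVEOPT;
        if (a & (1u << 1)) return SAVE_XSAVEC;
        return SAVE_XSAVE;
    }
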
11:50:22 <pterp> Do forked processes share the virtual address space?
11:51:25 <zid> share? no
11:51:35 <pterp> then how do they share the heap?
11:51:44 <zid> what
11:52:01 <pterp> if malloc allocates new memory after the fork, the other process won't get it.
11:52:17 <zid> Which fits what I said
11:52:22 <pterp> don't forked processes share the heap
11:52:31 <zid> I just said they didn't
11:52:37 <zid> fork makes new processes
11:52:53 <zid> "fork() creates a new process by duplicating the calling process"
11:53:00 <bcos> "Fork" mostly means converting the original virtual address space into a pair of "copy on write" copies; so neither process ends up using the original virtual address space
11:53:27 <zid> optimizations exist, but it won't help you not confuse what fork does
11:53:32 <pterp> Why wouldn't the parent keep the original?
11:53:55 <bcos> How can parent keep the original?
11:54:02 <zid> ..original what?
11:54:09 <bcos> (the original virtual address space)
11:54:12 <pterp> address space
11:54:46 <pterp> nvm, i see why parent doesn't keep the original.
11:54:49 <zid> what
11:54:50 <zid> the fuc
11:55:02 <pterp> zid: what
11:55:09 <zid> you're making *no* sense, at all
11:55:19 <pterp> what's confusing?
11:55:22 <zid> all of it
11:56:01 <zid> You keep using pronouns in weird ways, so the sentences themselves are confusing, and you don't seem to know what an address space *or* a process is, and resultantly, how they relate
11:56:10 <zid> so your questions, even if I think I understand them, are very muddled
11:58:47 <pterp> i know what an address space and a process are. An address space is a set of mappings from virtual addresses to physical addresses, divided up into blocks called "pages", which can have varying permissions or not be present at all, and a process is (at least what needs to be saved on a task switch) an address space and register values (including the instruction pointer)
11:59:23 <bcos> Hrm - let's not bring register values into this.. ;-)
11:59:50 <zid> now, keeping those things in mind, ask your questions again
12:00:09 <zid> [12:50] <pterp> Do forked processes share the virtual address space?
12:00:24 <zid> "a process is ... an address space"
12:00:58 <zid> It can't be both, can it. Either a process is a unique address space, or all processes share address spaces.
12:01:22 * bcos would say a process is a container that contains a virtual address space, one or more threads, plus resources common to those threads
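
bcos's container definition, rendered as a sketch struct; all of the names are invented for illustration:

    /* sketch of bcos's "process is a container" view; names invented */
    struct process {
        struct address_space *aspace;   /* the virtual address space */
        struct thread        *threads;  /* one or more threads */
        struct resources     *common;   /* files, handles, etc. shared by the threads */
    };
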
12:01:38 <zid> 12:53] <pterp> Why wouldn't the parent keep the original?
12:01:48 <zid> so now the parent's address space has been... deleted?
12:01:55 <zid> then it's stopped running?
12:02:07 <zid> all very confusing, considering you claim to know what all these words mean, I hope you can see that
12:03:09 <bcos> Original virtual address space is converted/transformed into a pair of "copy on write" copies; such that both copies are effectively equal and neither copy is the same as the original (without the "copy on write everything")
12:03:26 <zid> bcos: That's an optimization and just clouds the issue imo
12:03:47 <zid> it might make you think that the address spaces are shared on a logical level, which they're not
12:03:58 <pterp> I thought the parent kept the original address space and the child only got a copy-on-write, but i realize the parent doesn't keep the original, as if it made any writes, the child would see them, even though it shouldn't. That's why i asked that 12:53 question, but i answered it myself.
12:04:06 <bcos> Everything is an optimisation that "clouds" the issue. All we can say is "there's things and stuff that do things"?
12:04:20 <zid> No, you say it has the semantics it has.
12:04:39 <zid> If you don't understand the semantics, throwing in optimizations that *look* like they violate the semantics but don't really, is going to end in bad times(tm)
12:05:03 <bcos> pterp: Would also recommend reading this: https://www.microsoft.com/en-us/research/publication/a-fork-in-the-road/
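
A tiny demo of the semantics zid and bcos are describing, for anyone following along: after fork() each process has its own (logically independent, copy-on-write) address space, so a write in the child never shows up in the parent:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int x = 1;
        if (fork() == 0) {       /* child: writes its own copy of x */
            x = 42;
            printf("child:  x = %d\n", x);   /* prints 42 */
            return 0;
        }
        wait(NULL);              /* parent: its x is untouched */
        printf("parent: x = %d\n", x);       /* prints 1 */
        return 0;
    }
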
12:05:20 <zid> The literally hundred people this year in ##asm asking how 'and rsp, ~0xF' encodes the 'not' into the instruction is good evidence of that for me :P
12:07:49 <bcos> zid: If it's in ##asm, it's probably the same 2 people using different nicks, who keep forgetting and asking again later.. ;-)
12:09:20 <zid> ##asm is full of crazies, it's sort of interesting in a carcrash sort of way
12:09:43 <zid> full of the same sorts of people who'd make timecube.com or send 40 page emails to astrophysicists about how the universe is something else
* bcos wants the universe to be a continual "expand -> contract -> crunch -> bang" cycle
12:11:38 <pterp> not using fork, huh. i had that sort of idea too (do posix_spawn instead), but also pushing threads (a function that starts a new thread at a specified address, so you'd do thread_at(myfunc))
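
Both halves of pterp's idea already exist as POSIX calls: posix_spawn() creates a fresh process without a fork step, and pthread_create() is essentially the thread_at(myfunc) primitive. A hedged sketch (compile with -pthread; the echo argv is just an example):

    #include <pthread.h>
    #include <spawn.h>
    #include <stdio.h>
    #include <sys/wait.h>

    extern char **environ;

    static void *myfunc(void *arg) { puts("thread running"); return NULL; }

    int main(void)
    {
        pthread_t t;                       /* "thread_at(myfunc)" */
        pthread_create(&t, NULL, myfunc, NULL);
        pthread_join(t, NULL);

        pid_t pid;                         /* spawn instead of fork+exec */
        char *argv[] = { "echo", "spawned", NULL };
        if (posix_spawnp(&pid, "echo", NULL, NULL, argv, environ) == 0)
            waitpid(pid, NULL, 0);
        return 0;
    }
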
12:12:05 <zid> if it helps, I watched a talk and he claimed the patterns in the cmbr field sort of look like what you'd expect after the universe dies
12:12:20 <zid> so maybe it just restarts itself on a different scale periodically after it goes dead
12:12:55 <zid> so no crunch, you just shift the decimal point down a lot so the 'expanded' universe is now the singularity start of the next one
07:24:27 <mquy90> bcos, doug16k, I implemented a dead simple ata driver and created an empty disk, but the 1st sector's data is not 0 https://github.com/MQuy/mos/blob/master/src/kernel/devices/ata.c#L60
07:25:14 <mquy90> https://github.com/MQuy/mos/blob/master/src/build.sh#L59 qemu config
07:26:50 <bcos> mquy90: Cool. Next step is that you have to do some kind of "happy dance" to celebrate :-)
07:27:13 <mquy90> but it seems it is not correct
07:27:20 <bcos> (it's always nice the first time you get something to actually work)
07:27:23 <mquy90> the data I got is not all empty
07:28:20 <mquy90> yeah, kinda "happy dance" :D
07:28:54 <bcos> Sooner or later you'll want code to auto-detect which kind/s of devices are plugged into the ATA controller, and get their details (e.g. send "identify device" command to them)
07:29:25 <bcos> ..and modify it so that the same driver can work when there's more than one ATA controller
07:29:43 <mquy90> can you help me check if there is anything wrong in my code? my assumption is all bytes are 0x0 (1st sector, I created an empty disk)
07:30:06 <mquy90> but what I got are random values
07:31:39 <bcos> You've mostly assumed that the hard disk is set up for LBA-24 (but the hard disk might be using CHS or LBA-48, so...)
07:32:55 <mquy90> I thought that by default it is qemu-system-i386 -cdrom mos.iso -hda hdd.img -m 64M
07:33:03 <bcos> D'oh - it's LBA-28
07:34:01 <bcos> For a tiny (64 MiB) hard drive, I wouldn't be too surprised if Qemu used CHS as the initial default
07:34:33 <bcos> (it'd be chronologically correct at least - hard drives were much bigger when LBA28 got invented)
07:36:36 <mquy90> I increased the size to 256MB, it is the same. Hmm, I might have done something wrong
07:37:23 <bcos> Hrm. If the entire disk is zeros, then getting the block number wrong should still give you zeros if you don't get a "sector not found" error
07:38:28 <bcos> ..you don't really check the status register - probably should (otherwise you can't really know if the "read" command worked or not)
07:38:32 <mquy90> yeah, I read 1st sector, I think it should be the correct block
07:39:12 <bcos> Well, you can't know if it's the correct block. If it's LBA48 the drive might be waiting for the second half of the block number and might not have started the command at all
07:41:44 <doug16k> mquy90, you should not put hardcoded hex constants throughout a driver. do you know what 0x1F6 does? I doubt it. It is the drive select register. what's 0xE0? I know from looking at my named constants that it sets the LBA bit. do you know what the top bit is? It switches the drive to which the other registers are connected
07:42:02 <bcos> Hrm - the "if(ata_polling() )" might be in the wrong place too - I think you send the command, wait for IRQ, then read however many sectors
07:42:17 <bcos> (with the "inw" thing)
07:42:37 <doug16k> oops editing error. 1f6 switches the drive (master/slave)
07:42:57 <mquy90> 0x1F6 to select master, which I configured in qemu
07:43:18 <doug16k> totally hardcoded for just one controller too
07:43:25 <doug16k> have two controllers? tough!
07:43:40 <mquy90> ah, I just want to test to see it works :D
07:44:01 <mquy90> will improve it later
07:44:37 <doug16k> ok, just concerned if you continue with that practice you will end up with incomprehensible drivers for less simple devices
07:44:44 <bcos> mquy90: I'd recommend saying "Yay, that kind of worked sort of"; and then moving on to auto-detecting if devices (hard drive, CD) are plugged in where and the "identify device" command
07:46:01 <doug16k> writing a driver is hard enough when everything has a name and you can tell what it is doing by looking at the english words on the lines of code
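
What doug16k is asking for looks roughly like this: every port and bit gets a name, and the status register is polled before touching the data port. A sketch of an LBA28 PIO read, with the caveats that the constant names are illustrative (not from mquy90's code), only the primary controller at 0x1F0 is covered, and the outb/inb/inw port-I/O helpers are assumed to exist elsewhere in the kernel:

    #include <stdint.h>

    /* port-I/O helpers, assumed defined elsewhere in the kernel */
    extern void outb(uint16_t port, uint8_t val);
    extern uint8_t inb(uint16_t port);
    extern uint16_t inw(uint16_t port);

    #define ATA_PRIMARY_IO    0x1F0
    #define ATA_REG_DATA      (ATA_PRIMARY_IO + 0)
    #define ATA_REG_SECCOUNT  (ATA_PRIMARY_IO + 2)
    #define ATA_REG_LBA_LO    (ATA_PRIMARY_IO + 3)
    #define ATA_REG_LBA_MID   (ATA_PRIMARY_IO + 4)
    #define ATA_REG_LBA_HI    (ATA_PRIMARY_IO + 5)
    #define ATA_REG_DRIVE     (ATA_PRIMARY_IO + 6)   /* the 0x1F6 register */
    #define ATA_REG_CMD       (ATA_PRIMARY_IO + 7)
    #define ATA_REG_STATUS    (ATA_PRIMARY_IO + 7)

    #define ATA_DRIVE_LBA     0xE0   /* master drive, LBA addressing */
    #define ATA_CMD_READ_PIO  0x20
    #define ATA_SR_BSY        0x80
    #define ATA_SR_DRQ        0x08
    #define ATA_SR_ERR        0x01

    /* poll until BSY clears; fail on ERR or missing DRQ */
    static int ata_poll(void)
    {
        uint8_t s;
        do { s = inb(ATA_REG_STATUS); } while (s & ATA_SR_BSY);
        if (s & ATA_SR_ERR) return -1;
        return (s & ATA_SR_DRQ) ? 0 : -1;
    }

    static int ata_read_sector_lba28(uint32_t lba, uint16_t *buf)
    {
        outb(ATA_REG_DRIVE, ATA_DRIVE_LBA | ((lba >> 24) & 0x0F));
        outb(ATA_REG_SECCOUNT, 1);
        outb(ATA_REG_LBA_LO,  lba & 0xFF);
        outb(ATA_REG_LBA_MID, (lba >> 8) & 0xFF);
        outb(ATA_REG_LBA_HI,  (lba >> 16) & 0xFF);
        outb(ATA_REG_CMD, ATA_CMD_READ_PIO);
        if (ata_poll() != 0)               /* check status *after* the command */
            return -1;
        for (int i = 0; i < 256; i++)      /* 256 words = one 512-byte sector */
            buf[i] = inw(ATA_REG_DATA);
        return 0;
    }
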
07:47:13 <mquy90> I know what you mean, I totally agree :+1:
07:48:08 <zid> doug16k: Wait, my OS shouldn't be hardcoded for my exact hardware? crap
07:48:13 <zid> that sounds like work
07:49:00 <bauen1> wait, you aren't supposed to hardcode the mac of the qemu nic into your network stack ?
07:49:07 <mquy90> shit, it seems correct with bochs
07:49:09 <zid> bauen1: Wait, really!?
07:49:14 <bauen1> no :p
07:49:18 <zid> doublefuck
07:49:24 <mquy90> only having problem with qemu bcos haha
08:00:44 <bcos> Dammit.
08:01:41 <bcos> Linux locked up - had to give it some "REISUB" magic, only to reboot and see the lovely "file system not checked for 364 days" for every freaking partition :-(
08:02:24 <mquy90> you sound like Brendan?
08:02:51 <bcos> Maybe
08:03:43 * bcos has been complaining about how silly "file system check on boot" is for servers for about a decade now..
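
(The check bcos keeps hitting is ext2/3/4's periodic mount-count/interval fsck; if memory serves it can be switched off per filesystem with tune2fs, e.g. `tune2fs -c 0 -i 0 /dev/sda1`, where /dev/sda1 is whichever partition is complaining.)
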
08:04:15 <adu> bcos: have you heard linus talk about git?
08:04:33 <bcos> adu: No (I don't think so)
08:05:30 <adu> He talks about how the reason he started using md5/sha1 file hashes was that cvs/svn was notorious for not obeying the property "you get back the same data"
08:07:04 <adu> granted, doing the check on triple-striped raid might be a bit much
08:08:17 <adu> honestly, I've had similar experiences with HDFS on home grown hadoop clusters, with modern hardware
08:23:40 * bcos just assumes it's X11's fault when Linux locks up - easier that way ;-)
08:24:05 <zid> sysrq it
08:24:24 <zid> k should murder X
08:24:55 <zid> might need to r first
08:25:36 <bcos> I gave it the REISUB - probably should reboot this thing every few months anyway
08:26:20 <zid> k is the best thing ever though
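
(For reference: REISUB is the Alt+SysRq sequence unRaw, tErm, kIll, Sync, Unmount, reBoot, and zid's "k" is SAK, which kills every process on the current virtual console, X included. The same keys can be driven without the keyboard, assuming sysrq is enabled: `echo 1 > /proc/sys/kernel/sysrq`, then `echo k > /proc/sysrq-trigger`.)
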
08:27:28 <bcos> (more correctly; I should stop being lazy and buy another server - this one is probably running out of life expectancy)
08:29:16 <bcos> Hrm. I think this machine has been running 24/7 for about 9 years now (!)
08:30:36 <mquy90> is it ok to ask what you are doing, bcos?
08:31:03 * bcos is thinking about getting some sleep before the sun rises
08:32:28 <adu> that's a good uptime
08:33:08 <mquy90> I wonder, is os dev your hobby or job? anyone here?
08:33:31 <adu> I have never made an OS
08:34:07 <eryjus> hobby
08:34:08 <bcos> adu: That's not 9 years continuous - there's been a few power failures that lasted more than 45 minutes, several crashes, and several "reboot to test my code" in that time
08:34:18 <bcos> mquy90: Hobby here too
08:34:45 <adu> adu: my jobs usually involve managing fleets of servers
08:35:00 <adu> mquy90: ^
08:35:28 <mquy90> haha
08:37:31 <mquy90> hobby too. after writing my own language, I realized that there are a lot of things I didn't understand (under the hood), therefore I decided to learn os
08:37:47 <mquy90> one of my tough times :/
08:37:54 <adu> I guess I did write my own language
08:40:22 <adu> My interest recently is trying to figure out (1) how to build a UEFI-only OS that will survive even if Intel has its way of removing BIOS from the face of the earth, and (2) how to build a 2-stage OS, where the first stage is literally just a reimplementation of DUET
08:41:46 <adu> So consequently, I have learned a lot about BIOS+UEFI
08:54:04 <mquy90> I have no idea about UEFI, seems like there are a lot of things to learn
09:14:44 <adu> mquy90: it really depends on which side of it you're on
09:15:29 <adu> if you're a bootloader, it's so easy, because it starts bootloaders in long mode
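
A minimal illustration of how easy that side is, assuming the gnu-efi toolchain: the firmware loads the application and calls its entry point already in 64-bit long mode, with boot services live.

    #include <efi.h>
    #include <efilib.h>   /* gnu-efi */

    EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
    {
        InitializeLib(ImageHandle, SystemTable);   /* gnu-efi convenience init */
        Print(L"Hello from long mode\n");
        return EFI_SUCCESS;
    }
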