Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ (can't be searched)

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev&y=18&m=5&d=17

Thursday, 17 May 2018

00:00:00 --- log: started osdev/18.05.17
00:14:39 --- quit: awang_ (Ping timeout: 256 seconds)
00:17:11 --- join: awang_ (~awang@rrcs-24-106-163-46.central.biz.rr.com) joined #osdev
00:34:17 --- join: m3nt4L (~asvos@2a02:587:a023:f000:3285:a9ff:fe8f:665d) joined #osdev
00:37:28 --- join: zeus1 (~zeus@197.239.32.36) joined #osdev
00:47:31 --- quit: CheckDavid (Quit: Connection closed for inactivity)
00:48:57 --- quit: zeus1 (Ping timeout: 255 seconds)
00:57:32 --- join: ACE_Recliner (~ACE_Recli@c-73-18-225-48.hsd1.mi.comcast.net) joined #osdev
01:07:02 --- join: elevated (~elevated@unaffiliated/elevated) joined #osdev
01:08:44 --- join: Olgierd (~Olgierd@ns364739.ip-91-121-210.eu) joined #osdev
01:08:44 --- quit: Olgierd (Changing host)
01:08:44 --- join: Olgierd (~Olgierd@unaffiliated/olgierd) joined #osdev
01:09:52 --- quit: ZephyrZ80 (Quit: Leaving)
01:14:25 --- join: zeus1 (~zeus@197.239.7.127) joined #osdev
01:18:42 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
01:20:45 --- quit: elevated (Quit: bye)
01:23:04 --- join: elevated (~elevated@unaffiliated/elevated) joined #osdev
01:25:00 --- quit: cirno_ (Remote host closed the connection)
01:25:32 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
01:26:44 --- quit: Goplat (Remote host closed the connection)
01:30:42 --- quit: cirno_ (Remote host closed the connection)
01:31:27 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
01:32:04 --- quit: mavhq (Read error: Connection reset by peer)
01:33:17 --- join: mavhq (~quassel@cpc77319-basf12-2-0-cust433.12-3.cable.virginm.net) joined #osdev
01:35:00 --- quit: jjuran (Read error: Connection reset by peer)
01:39:44 --- join: jjuran (~jjuran@c-73-132-80-121.hsd1.md.comcast.net) joined #osdev
01:41:52 <andrei-n> Hello. Do you think the Minix 1.0 book is good enough to learn operating systems? I think Minix 2 would be too complicated because it has to deal with at least 2 different architectures at the same time...
01:42:24 <Mutabah> depends - handling portability is a very important thing
01:42:28 <klange> minix books teach you to make minix
01:42:55 <aalm> ^
01:43:42 <peterbjornx> i tried to keep portability in mind when starting work on my os
01:43:52 <andrei-n> klange, what do you mean? Does it teach some bad practices?
01:44:12 <peterbjornx> when i finally decided to do a port that greatly paid off
01:45:55 <klange> Using one resource in isolation won't offer you a wide range of options and understanding. The Minix books in particular teach you how to build Minix - they teach you to write a small, unix-like microkernel-based operating system; Tanenbaum has other books to cover wider topics.
01:48:02 --- join: `Guest00000 (~user@37.113.180.34) joined #osdev
01:51:12 <andrei-n> The problem is that there are too many resources and I don't know which one to focus on. At least Minix seems to teach a complete operating system (not like his Operating Systems book) and the xv6 book seems way too small for this subject... On the other hand, Minix 3 seems to describe too many things, and not all are useful...
01:51:30 <Mutabah> it's a BIIG domain
01:53:01 --- quit: hmmmm (Remote host closed the connection)
01:54:52 --- quit: gtucker (Quit: Coyote finally caught me)
01:55:35 --- join: gtucker` (gtucker@nat/collabora/x-xqnjlaqeloguuefu) joined #osdev
01:57:25 --- quit: gtucker` (Client Quit)
01:57:32 --- join: gtucker (gtucker@nat/collabora/x-ylaoofqgfxjpbeqw) joined #osdev
01:57:59 --- quit: Humble (Ping timeout: 248 seconds)
01:58:19 --- quit: zeus1 (Ping timeout: 264 seconds)
01:58:49 --- quit: oaken-source (Ping timeout: 240 seconds)
01:59:09 --- quit: gtucker (Client Quit)
01:59:15 --- join: kimundi (~Kimundi@i577A9194.versanet.de) joined #osdev
01:59:16 --- join: gtucker (gtucker@nat/collabora/x-ioojnwzoikjxoncw) joined #osdev
01:59:35 --- quit: Oxyd76 (Ping timeout: 248 seconds)
02:00:45 --- quit: cirno_ (Remote host closed the connection)
02:01:10 --- quit: gtucker (Client Quit)
02:01:17 --- join: gtucker (gtucker@nat/collabora/x-mvrdbfgbauemqumx) joined #osdev
02:01:22 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
02:06:41 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-rvvplojbfskeghbt) joined #osdev
02:06:52 --- join: nortega (~nortega@gateway/tor-sasl/deathsbreed) joined #osdev
02:11:09 --- join: mntmn (sid100487@gateway/web/irccloud.com/x-sxuqsugvyhbcmvfi) joined #osdev
02:11:45 <andrei-n> Mutabah, So what would be the best book to start with? I forgot to mention, there is also mmurtl, but it's almost completely written in assembly.
02:11:48 --- join: Humble (~hchiramm@106.208.151.227) joined #osdev
02:11:50 --- quit: awang_ (Ping timeout: 256 seconds)
02:14:13 --- join: awang_ (~awang@rrcs-24-106-163-46.central.biz.rr.com) joined #osdev
02:19:10 --- quit: CheckDavid (Ping timeout: 255 seconds)
02:21:22 --- join: zeus1 (~zeus@197.239.7.34) joined #osdev
02:22:57 <klange> "why won't :qa! kill this vim" i say to myself three times before realizing this is not vim, it's an outdated build of my own editor
02:23:10 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-mbukaueoaicbcndb) joined #osdev
02:24:14 --- quit: CheckDavid (Excess Flood)
02:24:40 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-ytmufwstgeshycvz) joined #osdev
02:26:31 <Mutabah> lolololol
02:26:37 <Mutabah> silly silly klange
02:26:45 <Mutabah> that said, it's also an awesome error to make
02:28:49 --- quit: nightmared (Ping timeout: 240 seconds)
02:30:14 --- quit: ACE_Recliner (Ping timeout: 265 seconds)
02:30:27 --- quit: vaibhav (Quit: Leaving)
02:30:41 --- quit: cirno_ (Remote host closed the connection)
02:31:22 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
02:34:07 --- quit: elevated (Quit: bye)
02:35:42 --- join: oaken-source (~oaken-sou@141.89.226.146) joined #osdev
02:37:12 --- join: elevated (~elevated@unaffiliated/elevated) joined #osdev
02:43:46 --- quit: zeus1 (Ping timeout: 265 seconds)
02:51:01 --- join: zeus1 (~zeus@197.239.7.34) joined #osdev
03:00:41 --- quit: cirno_ (Remote host closed the connection)
03:01:19 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
03:02:15 --- quit: oaken-source (Quit: Lost terminal)
03:03:21 --- join: oaken-source (~oaken-sou@141.89.226.146) joined #osdev
03:05:38 --- join: hchiramm_ (~hchiramm@2405:204:d388:c0df:7c39:2975:387f:9995) joined #osdev
03:06:55 --- quit: Humble (Ping timeout: 256 seconds)
03:25:38 <izabera> ^Z kill %%
03:25:43 <izabera> oh wait you have no job control
03:26:40 --- quit: MarchHare (Ping timeout: 255 seconds)
03:27:27 <klange> izabera: this is linux, asshole
03:27:35 <izabera> then ^Z kill %%
03:28:03 <klange> that doesn't work on a process that turns off control sequences
03:29:35 <klange> (vim suspends itself on ^Z, and that's overridable - had a setup in some embedded systems where we had vim but it was configured to do something else on ^Z, pissed me off for ages until I convinced the colleague responsible that it was a bad idea)
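
[editor's note: a minimal sketch, not from the log, of the mechanism klange describes. A full-screen program that puts the terminal into raw mode clears ISIG, so ^Z arrives as the byte 0x1A instead of making the tty driver send SIGTSTP; the program then suspends itself by hand, which is exactly why the binding is remappable. The calls below are standard termios/POSIX, but the toy pager itself is made up.]

    /* toy pager loop: ^Z is read as a byte and we raise SIGTSTP ourselves */
    #include <signal.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios orig, raw;
        tcgetattr(STDIN_FILENO, &orig);
        raw = orig;
        raw.c_lflag &= ~(ICANON | ECHO | ISIG);   /* raw-ish: ^C/^Z no longer signal */
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);

        char c;
        while (read(STDIN_FILENO, &c, 1) == 1 && c != 'q') {
            if (c == 0x1a) {                      /* ^Z: suspend by hand, like vim does */
                tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);
                raise(SIGTSTP);                   /* stops here until fg */
                tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw);
            }
        }
        tcsetattr(STDIN_FILENO, TCSAFLUSH, &orig);
        return 0;
    }
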
03:30:07 --- join: spare (~user@unaffiliated/spareproject) joined #osdev
03:30:14 <izabera> what did it do?
03:30:29 <klange> I don't even remember, but it wasn't suspend.
03:30:40 <klange> Some random command. Something buffer related I think.
03:30:41 --- quit: cirno_ (Remote host closed the connection)
03:31:22 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
03:34:29 --- join: Prf_Jakob (jakob@volt/developer/jakob) joined #osdev
03:37:01 --- quit: pie_ (Ping timeout: 255 seconds)
03:38:52 --- quit: kimundi (Remote host closed the connection)
03:40:41 --- join: kimundi (~Kimundi@i577A9194.versanet.de) joined #osdev
03:44:21 --- quit: cirno_ (Remote host closed the connection)
03:44:53 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
03:48:06 --- quit: MaryJaneInChain (Ping timeout: 268 seconds)
03:53:32 --- join: quc (~quc@host-89-230-167-99.dynamic.mm.pl) joined #osdev
03:54:17 --- join: zesterer (~zesterer@cpc138506-newt42-2-0-cust207.19-3.cable.virginm.net) joined #osdev
03:58:04 --- quit: cirno_ (Remote host closed the connection)
03:58:34 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
04:00:41 --- quit: cirno_ (Remote host closed the connection)
04:01:17 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
04:08:19 --- quit: immibis (Ping timeout: 240 seconds)
04:16:48 --- quit: Arcaelyx (Ping timeout: 276 seconds)
04:18:47 --- quit: cirno_ (Ping timeout: 255 seconds)
04:19:49 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
04:21:45 <andrei-n> Thanks everyone for your help.
04:21:46 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
04:25:16 --- quit: banisterfiend (Client Quit)
04:27:37 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
04:30:41 --- quit: cirno_ (Remote host closed the connection)
04:31:03 --- quit: awang_ (Ping timeout: 248 seconds)
04:31:22 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
04:33:29 --- join: awang (~awang@rrcs-24-106-163-46.central.biz.rr.com) joined #osdev
04:33:58 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
04:40:31 --- quit: nortega (Read error: Connection reset by peer)
04:40:31 --- quit: cirno_ (Read error: Connection reset by peer)
04:41:03 --- join: nortega (~nortega@gateway/tor-sasl/deathsbreed) joined #osdev
04:42:37 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
04:45:10 --- quit: nortega (Client Quit)
04:51:49 --- join: pie_ (~pie_@unaffiliated/pie-/x-0787662) joined #osdev
04:53:48 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
04:56:24 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
04:57:34 --- quit: CheckDavid (Quit: Connection closed for inactivity)
04:58:44 --- join: baschdel (~baschdel@2a01:5c0:10:3d11:bca2:7797:3876:4668) joined #osdev
05:00:39 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
05:00:42 --- quit: cirno_ (Remote host closed the connection)
05:01:25 --- join: cirno_ (~cirno_@gateway/tor-sasl/cirno/x-25801483) joined #osdev
05:02:35 --- quit: banisterfiend (Client Quit)
05:11:49 --- join: vdamewood (~vdamewood@unaffiliated/vdamewood) joined #osdev
05:16:17 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
05:24:30 --- quit: zesterer (Quit: Leaving)
05:24:49 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
05:27:09 --- join: NaNkeen (~nankeen@115.164.80.206) joined #osdev
05:28:12 --- quit: cirno_ (Quit: WeeChat 2.1)
05:33:20 --- join: CrystalMath (~coderain@reactos/developer/theflash) joined #osdev
05:36:08 --- quit: oaken-source (Ping timeout: 264 seconds)
05:42:14 --- join: vmlinuz (~vmlinuz@unaffiliated/vmlinuz) joined #osdev
05:59:30 --- quit: andrei-n (Quit: Leaving)
06:00:51 --- join: freakazoid0223 (~IceChat9@pool-108-52-244-197.phlapa.fios.verizon.net) joined #osdev
06:08:49 --- join: ACE_Recliner (~ACE_Recli@c-73-18-225-48.hsd1.mi.comcast.net) joined #osdev
06:09:19 --- quit: zeus1 (Ping timeout: 268 seconds)
06:17:14 <ALowther> It is my understanding that a standard consumer-level operating system nowadays operates on some sort of time-based interrupt? As in, each process runs for a certain amount of time and the kernel loops through all running processes and determines how long each one will run for... On a consumer level, at least for me, this allows me to have many (10+) programs open at one time and they can all seemingly function concurrently, even though
06:17:15 <ALowther> the CPU is really taking time executing instructions for each of them such that a program may not actually be getting any CPU time; it happens so quickly that it is transparent to the end user... Assuming I have a decent grasp on the basics of how that works, it led me to another thought, say with blockchain. A lot of people are mining on likely any standard OS. My thought/question was this: wouldn't all of that CPU time that is being
06:17:15 <ALowther> allocated to background processes be wasted and lost CPU cycles that could've gone towards mining? Couldn't somebody write an OS specifically designed for mining so that the only processes running, and therefore taking CPU cycles, are processes essential for mining/maintaining system security? Or would the gains be so minimal that such an effort is wasted?
06:17:40 --- join: rolfino (~detlef@x590fed30.dyn.telefonica.de) joined #osdev
06:18:21 <mischief> .theo
06:18:21 <glenda> Unlikely.
06:21:00 <ALowther> I don't really care about writing an OS for mining, I am just more curious as to the concept presented. Are significant resources being wasted for programs/services that are run by default but aren't used by the end-user?
06:22:36 --- join: zeus1 (~zeus@197.239.36.183) joined #osdev
06:27:37 --- part: rolfino left #osdev
06:33:20 --- join: Shikadi (~Shikadi@cpe-98-10-34-205.rochester.res.rr.com) joined #osdev
06:59:41 --- join: dennis95 (~dennis@mue-88-130-61-097.dsl.tropolys.de) joined #osdev
07:21:33 --- join: andrei-n (~andrei@184.206-65-87.adsl-dyn.isp.belgacom.be) joined #osdev
07:22:10 --- quit: NaNkeen (Ping timeout: 265 seconds)
07:22:48 --- quit: zeus1 (Ping timeout: 256 seconds)
07:24:31 --- join: Barrett (~barrett@unaffiliated/barrett) joined #osdev
07:27:45 --- quit: exezin (Quit: WeeChat 1.4)
07:29:57 --- join: hmmmm (~sdfgsf@pool-72-79-162-68.sctnpa.east.verizon.net) joined #osdev
07:30:37 --- join: exezin (~exezin@unaffiliated/exezin) joined #osdev
07:40:02 --- quit: grouse (Quit: Leaving)
07:40:15 <Vercas> ALowther: There are no cryptocurrencies that can benefit from uninterrupted CPU time.
07:41:10 <Vercas> Usual proof-of-work requires raw computing power, and parallelizes very well. Even GPUs are obsolete here, ASICs are the norm.
07:42:41 <Vercas> The only algorithm I'm aware of that works well-ish on CPUs is CryptoNight, which needs L3 cache. This will leave you with insane amounts of unused computing power because the caches/memory are very slow...
07:51:14 --- join: NaNkeen (~nankeen@115.164.80.206) joined #osdev
07:55:49 --- quit: NaNkeen (Ping timeout: 240 seconds)
07:56:47 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
08:02:27 --- quit: aalm (Ping timeout: 240 seconds)
08:03:33 <peterbjornx> also, background tasks usually only get scheduled when they actually need to do something
08:03:44 --- quit: Guest92125 (Ping timeout: 264 seconds)
08:04:17 <peterbjornx> if there are no other processes wanting to run, a process shouldn't actually be preempted
08:05:07 <peterbjornx> the timer interrupt will fire but if the run queue is empty besides the current task, what else would the OS do? waste cycles in the idle task?
08:06:38 <peterbjornx> so if you carefully pick what daemons you run at what priority a normal *nix kernel will achieve that goal just fine
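
[editor's note: a minimal sketch, not from the log, of the point peterbjornx makes above: on a timer tick a CPU-bound task is only preempted when something else is actually runnable, so idle background daemons cost almost nothing. All of the names (current, runqueue_len, pick_next, context_switch) are hypothetical, not any real kernel's API.]

    struct task { unsigned long ticks; /* ... */ };

    extern struct task *current;              /* hypothetical: task now on the CPU */
    extern int runqueue_len(void);            /* hypothetical: how many others are runnable */
    extern struct task *pick_next(void);      /* hypothetical: scheduling policy lives here */
    extern void context_switch(struct task *from, struct task *to);

    void timer_tick(void)
    {
        current->ticks++;                     /* account the slice that just elapsed */

        if (runqueue_len() == 0)
            return;                           /* nothing else wants the CPU: no preemption */

        struct task *next = pick_next();
        if (next != current)
            context_switch(current, next);    /* otherwise the miner/compile/etc. keeps running */
    }
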
08:13:52 --- join: daniele_athome (~daniele_a@5.170.125.26) joined #osdev
08:14:43 --- join: S_Gautam (3bb6f9ef@gateway/web/freenode/ip.59.182.249.239) joined #osdev
08:20:49 --- join: andrei-n_ (~andrei@184.206-65-87.adsl-dyn.isp.belgacom.be) joined #osdev
08:23:48 --- quit: andrei-n (Ping timeout: 276 seconds)
08:32:31 --- quit: m3nt4L (Remote host closed the connection)
08:33:11 --- join: BartAdv (uid90451@gateway/web/irccloud.com/x-acbcinklphhaefxt) joined #osdev
08:34:44 --- join: nightmared (~nightmare@unaffiliated/nightmared) joined #osdev
08:37:44 --- join: NaNkeen (~nankeen@115.164.80.206) joined #osdev
08:49:30 --- join: drakonis (~drakonis@unaffiliated/drakonis) joined #osdev
08:51:00 --- quit: S_Gautam (Quit: Page closed)
08:52:55 --- join: Asu (~sdelang@248.43.136.77.rev.sfr.net) joined #osdev
08:58:27 --- quit: vmlinuz (Ping timeout: 256 seconds)
09:00:15 --- join: dave24 (~rirc_v0.1@92.207.128.213) joined #osdev
09:01:51 --- quit: NaNkeen (Ping timeout: 260 seconds)
09:02:19 --- join: Asu` (~sdelang@AMarseille-658-1-27-148.w86-219.abo.wanadoo.fr) joined #osdev
09:06:15 --- quit: Asu (Ping timeout: 248 seconds)
09:07:02 --- join: unixpickle (~alex@c-24-5-86-101.hsd1.ca.comcast.net) joined #osdev
09:08:03 --- quit: variable (Quit: Found 1 in /dev/zero)
09:10:45 --- quit: awang (Remote host closed the connection)
09:13:28 --- join: vmlinuz (~vmlinuz@177.138.172.134) joined #osdev
09:13:28 --- quit: vmlinuz (Changing host)
09:13:28 --- join: vmlinuz (~vmlinuz@unaffiliated/vmlinuz) joined #osdev
09:14:20 <lkurusa> .theo
09:14:21 <glenda> That is not true.
09:14:43 <lkurusa> oh no, it broke
09:15:10 <lkurusa> or maybe it’s just the Riot.im client that’s being super slow
09:15:26 <lkurusa> I was looking at “The Lounge” IRC Client as an alternative potentially
09:17:59 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
09:19:15 --- quit: unixpickle (Quit: My MacBook has gone to sleep. ZZZzzz…)
09:20:35 --- join: KernelBloomer (~SASLExter@gateway/tor-sasl/kernelbloomer) joined #osdev
09:20:58 --- join: zeus1 (~zeus@197.239.6.64) joined #osdev
09:23:36 --- quit: vdamewood (Quit: Textual IRC Client: www.textualapp.com)
09:39:41 --- join: S_Gautam (3bb6f1b0@gateway/web/freenode/ip.59.182.241.176) joined #osdev
09:46:16 --- join: aalm (~aalm@37-219-237-30.nat.bb.dnainternet.fi) joined #osdev
09:47:18 --- quit: nur (Quit: Leaving)
09:48:44 --- join: nur (~hussein@slb-97-111.tm.net.my) joined #osdev
10:00:11 --- quit: bcos_ (Ping timeout: 260 seconds)
10:03:10 --- quit: nur (Remote host closed the connection)
10:04:51 --- join: Asu (~sdelang@224.43.136.77.rev.sfr.net) joined #osdev
10:05:31 --- quit: Asu` (Ping timeout: 264 seconds)
10:05:44 --- join: xenos1984 (~xenos1984@22-164-191-90.dyn.estpak.ee) joined #osdev
10:07:39 --- join: OppressedAF (~user@194.135.153.231) joined #osdev
10:10:20 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
10:10:22 --- join: freakazoid0223_ (~IceChat9@pool-108-52-244-197.phlapa.fios.verizon.net) joined #osdev
10:10:57 --- quit: freakazoid0223 (Ping timeout: 240 seconds)
10:15:23 --- quit: daniele_athome (Read error: Connection reset by peer)
10:16:42 --- join: nur (~hussein@slb-97-111.tm.net.my) joined #osdev
10:19:09 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
10:24:56 --- join: light2yellow (~l2y@185.220.70.172) joined #osdev
10:26:03 --- quit: celadon (Quit: ZNC 1.6.6+deb1 - http://znc.in)
10:30:34 --- quit: nur (Quit: Leaving)
10:30:52 --- join: drakonis_ (~drakonis@unaffiliated/drakonis) joined #osdev
10:30:58 --- quit: drakonis_ (Remote host closed the connection)
10:38:04 --- quit: ACE_Recliner (Read error: Connection reset by peer)
10:40:43 --- join: nur (~hussein@118.101.73.202) joined #osdev
10:45:47 --- join: oaken-source (~oaken-sou@p5DDB4D14.dip0.t-ipconnect.de) joined #osdev
10:48:24 --- join: bcos_ (~bcos@58.170.96.71) joined #osdev
10:53:43 --- join: m3nt4L (~asvos@2a02:587:a023:f000:3285:a9ff:fe8f:665d) joined #osdev
11:01:48 --- join: tacco| (~tacco@i59F52EE4.versanet.de) joined #osdev
11:25:20 --- join: climjark_ (~climjark@c-24-0-220-123.hsd1.nj.comcast.net) joined #osdev
11:25:23 <climjark_> hello all
11:27:59 --- join: daniele_athome (~daniele_a@5.170.124.73) joined #osdev
11:28:12 <climjark_> im still trying to wrap my head around physical and virtual memory management, i initialized paging by placing my page directory in a random spot (i know this is probably not advised)
11:28:49 <climjark_> i grabbed the memory map grub gave me, and now cycle thru each map and place the address and size in 2 arrays within my vmmu object
11:28:51 <climjark_> https://imgur.com/a/R75iYDT
11:28:52 <bslsk05> ​imgur.com: Imgur: The magic of the Internet
11:30:03 --- join: attah (~attah@h-155-4-135-114.NA.cust.bahnhof.se) joined #osdev
11:31:36 <climjark_> my question is what exactly should i do with that? like should i make a bitmap, set it to the address of one and create another variable that tracks the amount i use?
11:32:56 --- quit: daniele_athome (Ping timeout: 260 seconds)
11:34:51 --- join: JonRob (jon@gateway/vpn/privateinternetaccess/jonrob) joined #osdev
11:39:53 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
11:43:47 --- quit: gxt (Quit: WeeChat 2.0.1)
11:44:17 <lkurusa> The GRUB memory map will tell you (as far as i recall) which regions of physical memory you can or cannot use
11:44:33 <lkurusa> you're absolutely gonna need some data structure to keep track of what pages of physical memory are free, are allocated, etc.
11:46:18 --- join: awang (~awang@rrcs-24-106-163-46.central.biz.rr.com) joined #osdev
11:48:23 --- quit: banisterfiend (Remote host closed the connection)
11:48:27 --- join: Ryanel (~Ryanel@104.220.91.246) joined #osdev
11:49:24 <chrisf> buddy bitmap isnt that hard to build, and works well enough for real systems
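
[editor's note: a minimal sketch, not from the log, of the "data structure to track free physical pages" being discussed: a plain one-bit-per-4KiB-frame bitmap seeded from a GRUB-style memory map. The mmap_entry layout is a simplified stand-in for the multiboot structure, and real code would still re-mark the frames holding the kernel, the bitmap itself and the page tables as used.]

    #include <stdint.h>

    #define FRAME_SIZE  4096u
    #define MAX_FRAMES  (1u << 20)                  /* enough bits to cover 4 GiB */

    static uint32_t frame_bitmap[MAX_FRAMES / 32];  /* bit set = frame in use */

    struct mmap_entry { uint64_t base, len; uint32_t type; };  /* type 1 = usable RAM */

    static void frame_set(uint32_t f)   { frame_bitmap[f / 32] |=  (1u << (f % 32)); }
    static void frame_clear(uint32_t f) { frame_bitmap[f / 32] &= ~(1u << (f % 32)); }

    void pmm_init(const struct mmap_entry *map, int entries)
    {
        /* start with everything "in use", then free what the firmware map calls RAM */
        for (uint32_t i = 0; i < MAX_FRAMES / 32; i++)
            frame_bitmap[i] = 0xffffffffu;

        for (int i = 0; i < entries; i++) {
            if (map[i].type != 1)
                continue;
            for (uint64_t a = map[i].base; a + FRAME_SIZE <= map[i].base + map[i].len; a += FRAME_SIZE) {
                uint64_t f = a / FRAME_SIZE;
                if (f >= MAX_FRAMES)        /* ignore RAM above what the bitmap covers */
                    break;
                frame_clear((uint32_t)f);
            }
        }
    }

    uint64_t pmm_alloc_frame(void)              /* first fit; returns 0 when out of memory */
    {
        for (uint32_t f = 1; f < MAX_FRAMES; f++) {
            if (!(frame_bitmap[f / 32] & (1u << (f % 32)))) {
                frame_set(f);
                return (uint64_t)f * FRAME_SIZE;
            }
        }
        return 0;
    }
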
11:49:37 --- quit: BartAdv (Quit: Connection closed for inactivity)
11:49:50 --- quit: bemeurer (Ping timeout: 256 seconds)
11:52:37 --- quit: OppressedAF (Quit: WeeChat 2.1)
11:53:07 <Ryanel> I wonder if having two different page allocators for different systems is a good strategy. Like, having a stack allocator on memory constrained systems, and a buddy-bitmap on non-memory constrained systems.
11:55:09 --- join: Tazmain (~Tazmain@unaffiliated/tazmain) joined #osdev
12:00:07 --- join: ACE_Recliner (~ACE_Recli@c-73-18-225-48.hsd1.mi.comcast.net) joined #osdev
12:00:28 --- quit: zeus1 (Read error: Connection reset by peer)
12:02:05 --- quit: m3nt4L (Remote host closed the connection)
12:03:50 <climjark_> lkurusa, ok makes sense, and once paging is enabled, i should be using this memory to swap virtual memory in and out of processes, if im understanding correctly
12:06:05 --- join: _sfiguser (~sfigguser@130.108.214.77) joined #osdev
12:08:12 --- quit: KernelBloomer (Remote host closed the connection)
12:16:34 --- join: zeus1 (~zeus@197.239.5.5) joined #osdev
12:19:53 --- join: rafaeldelucena (~rafaeldel@2804:14d:ba83:2709:70e1:295:b411:293d) joined #osdev
12:20:35 --- join: t3hn3rd (~Logic_Bom@cpc91224-cmbg18-2-0-cust219.5-4.cable.virginm.net) joined #osdev
12:21:15 <t3hn3rd> Got Asuro booting on a bare-metal system, feelsgoodman.
12:21:41 <climjark_> nice!
12:22:40 <t3hn3rd> PCI enumeration is definitely not disabled due to it causing a triple-fault, no-sir-ee
12:24:17 <t3hn3rd> Serial output has definitely been a godsend.
12:25:09 <geist> yah i have been preaching the serial port gospel for years. it's really helpful when you're on an x86 and you triple fault and you lose whatever log you may have had
12:26:20 --- quit: oaken-source (Ping timeout: 276 seconds)
12:26:44 <t3hn3rd> It's been amazing, first triple fault was caused by PCI enumeration, second TF was not zeroing struct memory before use... Oops.
12:32:27 --- join: mardecanjon (~MrPooper@KD027092208157.ppp-bb.dion.ne.jp) joined #osdev
12:32:59 <t3hn3rd> formatting a real IDE Drive to Fat32 now :P
12:34:14 --- join: JusticeEX (~justiceex@pool-98-113-143-43.nycmny.fios.verizon.net) joined #osdev
12:37:27 --- quit: zeus1 (Ping timeout: 248 seconds)
12:42:45 <lkurusa> climjark_ (IRC): don’t worry about swapping just yet
12:42:56 <geist> yeah that's really far down the road
12:43:26 <mardecanjon> i dunno about everyone, but i have purpose in life
12:43:41 <mardecanjon> allthough being ripped out always from my own inventions
12:43:54 <geist> mardecanjon: remember the agreement?
12:44:17 <mardecanjon> in the end it is me because of who, general level raises, i have no agreement
12:44:47 <mardecanjon> there was a guy who mentioned my name and extorted my home address and pissed me off with his
12:45:07 <mardecanjon> kinda threats to invite me into court, but i am not afraid of that
12:45:46 --- mode: ChanServ set +o geist
12:45:54 --- mode: geist set +b *!*MrPooper@*.ppp-bb.dion.ne.jp
12:45:54 --- kick: mardecanjon was kicked by geist (sorry)
12:46:06 <S_Gautam> was it the same person?
12:46:24 <geist> yeah, clearly. though surprising they were coming from .jp this time
12:48:53 --- quit: S_Gautam (Quit: Page closed)
12:49:47 <geist> and now he's spewing nonsense at me in privmsg. pretty sad, wish they'd get some sort of help
12:51:00 <t3hn3rd> I really want to find an E1000 compatible card on eBay :/
12:52:45 <geist> oh there are tons of em
12:53:00 <geist> but then there's an entire family of those things. what interface are you looking for?
12:53:23 <geist> if you want something modern, i'd suggest looking for an i219, that's the modern consumer level pci-e chip, that's also e1000 compatible
12:53:36 <geist> it's actually a pretty darn good card too
12:56:24 <t3hn3rd> want something cheap to shove into my test bench machine - only Ethernet driver I have written is a PCI E1000 driver.
12:58:04 <t3hn3rd> Guessing most listings won't list the card as E1000 or I217
12:58:13 <t3hn3rd> even if it's compatible.
12:58:14 --- join: MarchHare (~Starwulf@mo-184-5-205-69.dhcp.embarqhsd.net) joined #osdev
12:58:38 <geist> what interface are you looking for?
12:58:41 <geist> pci? pcie?
12:59:36 <t3hn3rd> PCI
13:00:03 <geist> ah then you do want something older. you have a PCI based box you're testing on?
13:00:23 <geist> 82557 is i believe the fairly standard old e1000 nic
13:00:29 <geist> i honestly dunno where the 'e1000' name comes from
13:00:34 <geist> probably ethernet gigabit?
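
[editor's note: a minimal sketch, not from the log, of how a driver typically identifies cards in the family being discussed: legacy port-I/O PCI configuration access (CONFIG_ADDRESS at 0xCF8, CONFIG_DATA at 0xCFC) reading the vendor/device ID. The device-ID list is only illustrative (0x100E is the 82540EM that QEMU's e1000 emulates); check the Intel datasheets for the real set.]

    #include <stdint.h>

    static inline void outl(uint16_t port, uint32_t val)
    {
        __asm__ volatile ("outl %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint32_t inl(uint16_t port)
    {
        uint32_t val;
        __asm__ volatile ("inl %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    static uint32_t pci_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
    {
        uint32_t addr = 0x80000000u | ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xfc);
        outl(0xCF8, addr);          /* CONFIG_ADDRESS */
        return inl(0xCFC);          /* CONFIG_DATA */
    }

    int is_e1000_family(uint8_t bus, uint8_t dev, uint8_t fn)
    {
        uint32_t id = pci_read32(bus, dev, fn, 0x00);
        uint16_t vendor = id & 0xffff;
        uint16_t device = id >> 16;
        if (vendor != 0x8086)       /* Intel */
            return 0;
        /* a few example device IDs: 82540EM, 82545EM, 82574L */
        return device == 0x100E || device == 0x100F || device == 0x10D3;
    }
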
13:01:52 <t3hn3rd> Yeah, the barebones box is ancient, it's an old AMD Sempron CPU 200MHz/A7N8X-VM400 Mobo/512MB RAM
13:02:06 <geist> oh come on, give it some love! get a newer box
13:02:13 <t3hn3rd> I'm poor!
13:02:16 --- nick: Asu -> Asu`afk
13:02:18 <geist> alright
13:02:20 <t3hn3rd> :P
13:03:06 <lkurusa> wow that looks old
13:04:09 <t3hn3rd> Had to do a BIOS update using the AMI Flash Utility to even get the thing to boot from a USB
13:07:33 --- join: BrainFog (~androirc@cpc91224-cmbg18-2-0-cust219.5-4.cable.virginm.net) joined #osdev
13:07:36 <t3hn3rd> Hmm, can only find one listing for a 82557 PCI Card on eBay, and that's in the US :( Might have to look into other PCI Ethernet cards and try and write a driver for one.
13:07:40 <BrainFog> Hey
13:08:32 <t3hn3rd> Tempted to work out how to implement a Realtek RTL2832U driver at some point too.
13:08:46 <BrainFog> or just write the other drivers...
13:12:36 --- join: xerpi (~xerpi@149.red-83-45-192.dynamicip.rima-tde.net) joined #osdev
13:12:58 <t3hn3rd> or maybe an Ethernet over Serial driver
13:15:32 <clever> with one of my crazy ideas i had planned many years ago, i wanted an fpga to do ~70mbit data transfer over ethernet
13:15:57 <clever> but i also didnt want to bother with doing dhcp and avahi in verilog, lol
13:16:13 <clever> so my rough plan, was to create a mitm capable ethernet card in an fpga
13:16:26 <clever> the fpga would present a custom api over SPI, to a raspberry pi
13:16:35 <clever> and the rpi would have a custom ethernet driver to support that
13:16:52 <clever> and the fpga would then relay all packets to/from an ethernet controller
13:17:14 <clever> but, the fpga could also inject 70mbit of outgoing traffic claiming to be from the same ip&mac, without the rpi being aware of it
13:17:33 <clever> and some out-of-band control signals over the SPI would tell the fpga what ip to spoof
13:18:24 <t3hn3rd> That's.... Clever... Sorry, couldn't resist. Does sound quite fun though, albeit somewhat odd :P
13:18:45 <t3hn3rd> Was thinking of using a RPi for Eth/Serial
13:19:34 --- quit: baschdel (Ping timeout: 260 seconds)
13:19:43 <t3hn3rd> Just for fun, it's hardly going to be very functional. :P
13:20:01 <clever> yeah, thats almost the exact opposite of my plan, to avoid putting high-bandwidth data thru the rpi
13:20:24 --- quit: attah (Remote host closed the connection)
13:20:49 <t3hn3rd> Your implementation would most certainly be much more functional!
13:22:11 --- join: zeus1 (~zeus@197.239.5.5) joined #osdev
13:24:12 --- part: akasei left #osdev
13:24:23 --- quit: JusticeEX (Ping timeout: 248 seconds)
13:36:58 --- join: lf94 (~lf94@unaffiliated/lf94) joined #osdev
13:37:29 <lf94> are there any operating systems that are basically single process? :) I mean only one program can run at a time, and can do anything with the hardware.
13:38:28 --- quit: _ether_ (Remote host closed the connection)
13:38:55 <bcos_> lf94:UEFI (firmware)
13:38:59 --- join: JusticeEX (~justiceex@pool-98-113-143-43.nycmny.fios.verizon.net) joined #osdev
13:39:19 <bcos_> ..the thing that runs before an OS starts
13:40:46 <bcos_> (also old OSs like MS-DOS, from ancient times before people's expectations improved)
13:41:40 <lkurusa> lf94: unikernels come to mind
13:42:24 <lf94> I'm just thinking, I can imagine myself being ok with doing some text editing, save&quit, run a compile program, run the resulting program, rinse and repeat
13:42:47 <lf94> but these programs can adjust cpu states and frequency
13:43:22 <lf94> What's the cost of switching between c-states?
13:43:27 <lf94> (in time)
13:43:57 <bcos_> It's about the same as the size of a piece of string (in length)
13:44:49 <bcos_> (depends on which c-states, which CPU models, which clock frequencies, ...)
13:45:01 <lf94> so small it wouldnt matter I take it
13:45:44 --- nick: Asu`afk -> Asu
13:45:59 <bcos_> An average person lives for about 70 years, and anything that takes less than one week could be considered small by comparison
13:46:36 <lf94> if you want we can discuss in terms of cpu cycles?
13:46:58 <lf94> Going from C2 to C0
13:47:10 <lf94> How many cycles does it take on...the latest i3
13:48:24 <bcos_> For 80486 I think it was essentially zero cycles - they just stopped the CPU's clock so the CPU was "frozen in flight"
13:49:30 <bcos_> For modern CPUs it's likely to be far more complex
13:50:17 --- quit: ACE_Recliner (Ping timeout: 265 seconds)
13:51:28 <bcos_> Hmm
13:51:38 <bcos_> lf94: This has some measurements: http://ena-hpc.org/2014/pdf/paper_06.pdf
13:52:16 --- join: ACE_Recliner (~ACE_Recli@c-73-18-225-48.hsd1.mi.comcast.net) joined #osdev
13:52:32 <Barrett> does anything like that qualify as an OS?
13:53:36 <bcos_> Barrett: I don't think there's an widely accepted definition of "OS"
13:55:26 <Barrett> sure
13:56:03 --- join: qweo (~rider@109-252-69-50.nat.spd-mgts.ru) joined #osdev
13:56:23 <Barrett> but running a single process without multitasking doesn't even need a true kernel
13:57:56 <bcos_> A full blown "multi-tasking, multi-user, with GUI" OS doesn't need a kernel either
13:58:19 --- quit: kasumi-owari (Ping timeout: 264 seconds)
13:59:13 <Barrett> depends on what you mean for kernel : p
13:59:29 --- join: kasumi-owari (~kasumi-ow@ftth-213-233-237-007.solcon.nl) joined #osdev
14:00:10 <Barrett> if that's the case, you can even run autocad on top of an abacus
14:00:16 <Barrett> does an abacus qualify as a computer?
14:01:23 --- join: pounce (~pounce@c-24-11-27-195.hsd1.ut.comcast.net) joined #osdev
14:01:38 <bcos_> E.g. you could have something like an exo-kernel; but instead of having a "kernel in user-space library" processes could just use the low-level interface directly
14:01:47 --- join: glauxosdever (~alex@ppp-94-66-41-183.home.otenet.gr) joined #osdev
14:02:00 <Barrett> I was thinking too about exokernels
14:02:44 <lkurusa> exokernels are kinda cool
14:03:35 <Barrett> but I think as long as there's no hardware abstraction, I don't think can be considered an OS
14:04:48 <lkurusa> it’s sad that friends at MIT have stopped exploring the idea of exokernels further
14:05:03 <lkurusa> I think there are a number of interesting research directions open
14:05:38 <Barrett> I think there's just no real need for those
14:05:56 <Barrett> little gain compared to a hypervisor or a vm on top of linux
14:06:54 <lkurusa> there is no real need now
14:07:19 --- quit: Halofreak1990 (Ping timeout: 264 seconds)
14:07:27 <Barrett> do you think with CPUs going to be faster and faster there will be need?
14:07:43 <bcos_> "need" is the wrong word - we have WSL on Windows and Wine on Linux; so it's a little obvious that the ability to have multiple "personalities" is beneficial for someone
14:08:19 <lkurusa> Barrett, I wouldn’t hold my breath for faster CPUs, but more CPUs
14:08:22 <pounce> /join #scriptkitties
14:08:24 <pounce> shit
14:08:29 <bcos_> ..it's just exo-kernel tends to be designed to do that efficiently, while WSL and Wine are more like a hack on top of something not designed for it
14:08:41 <Barrett> bcos_, wine is a reimplementation of an API on top of another kernel...that's a bit different
14:09:32 <bcos_> It's all just layers separated by interfaces/APIs
14:10:54 <bcos_> Hrm
14:11:17 <bcos_> Barrett: Imagine a "slider" that you can drag left/right to control how low level the kernel's API happens to be..
14:12:54 <bcos_> ..with the slider all the way to the right you get a big fat monolithic with a relatively high-level API, and with the slider all the way to the left you get that little piece that is mapped into all address spaces when you enable KPTI
14:13:19 <Barrett> but my original point was that the trouble of making a kernel API that's general enough to run other kernels on top of it doesn't give enough gains to justify it
14:14:12 <Barrett> I mean a kernel that basically abstracts hardware to make multiple kernels run at the same time
14:14:41 --- join: Shamar (~giacomote@unaffiliated/giacomotesio) joined #osdev
14:15:20 --- quit: dennis95 (Quit: Leaving)
14:15:43 <bcos_> I mean that there's N layers of stuff separated by interfaces (ranging from lower-level to higher-level); where the word "kernel" is just an arbitrary name for a randomly selected layer
14:17:41 <bcos_> If you want, you could probably get a description of Windows and replace "HAL" with "exo-kernel" and replace "kernel" with "system abstraction"; and say that Windows is an exo-kernel
14:17:58 --- quit: spare (Quit: leaving)
14:19:12 --- quit: xerpi (Quit: Leaving)
14:19:13 <lkurusa> Barrett: The gains were quite significant in one of the papers
14:19:24 <Barrett> the fact that everything is "by definition" is obvious
14:19:34 <Barrett> even positive and negative poles are just "by definition"
14:19:39 <lkurusa> the one with Kaashoek as first author
14:19:54 <lkurusa> “Application Flexibility and Performance on Exokernels” was the title or so
14:19:59 <lkurusa> That said, they did note that the API went through loads of iterations
14:20:04 <Barrett> but in the modern meaning the kernel is something very well defined
14:21:57 <Barrett> lkurusa, benchmarks are evil
14:22:03 --- join: immibis (~chatzilla@222-155-160-32-fibre.bb.spark.co.nz) joined #osdev
14:22:20 <lkurusa> Barrett, debatable
14:22:51 <Barrett> I'm pretty sure it'd be hard to make different kernels access the same resources efficiently
14:23:05 <Barrett> without a big redesign of those kernels
14:23:46 <Barrett> benchmarks makes sense when you have the whole thing running
14:24:05 <Barrett> with to say, win and linux concurrently on an exokernel
14:24:20 --- quit: tacco| ()
14:24:26 <Barrett> otherwise that's just theory
14:24:37 <lkurusa> the goal of the exokernel was not to run two operating systems side by side
14:24:55 <rain1> https://www.youtube.com/watch?v=kZRE7HIO3vk did you guys watch this yet?
14:24:55 <bslsk05> ​'The Thirty Million Line Problem' by Handmade Hero (01:48:55)
14:25:01 <lkurusa> it was to create abstractions for a specific application or a specific group of applications
14:26:21 --- quit: JusticeEX (Ping timeout: 268 seconds)
14:26:35 --- quit: josuedhg_ (Remote host closed the connection)
14:28:54 --- quit: vmlinuz (Quit: Leaving)
14:29:27 <Barrett> then tell them not to call those LibOS
14:29:37 <Barrett> but I'm pretty sure that was the original purpose.
14:33:15 <Barrett> however, who knows, futuristic CPUs may be sold with an exokernel
14:33:29 <Barrett> and no way to circumvent it.
14:33:57 <aalm> KoC
14:33:58 <Barrett> but would it be really different than now?
14:34:22 <Barrett> take for example MMU...
14:34:50 <lkurusa> what’s so wrong about the “LibOS” naming?
14:35:25 <Barrett> that they call it library operating system
14:35:34 <Barrett> not just application that can access hardware
14:35:47 <Barrett> or whatever like that
14:36:01 <Barrett> the intention looks explicit to me
14:36:40 <lkurusa> the intention is that these “LibOS” implement behavior traditionally found in operating systems, hence the library OS naming
14:36:41 --- join: zesterer (~zesterer@cpc138506-newt42-2-0-cust207.19-3.cable.virginm.net) joined #osdev
14:36:57 <lkurusa> applications would then link with these and run themselves
14:37:15 --- quit: elevated (Quit: bye)
14:37:24 <lkurusa> I don’t see how this implies running two OSes at the same time at all
14:37:44 <lkurusa> (while theoretically possible)
14:38:02 --- quit: glauxosdever (Quit: leaving)
14:40:40 --- join: elevated (~elevated@unaffiliated/elevated) joined #osdev
14:43:54 <Barrett> this discussion is becoming boring.
14:44:13 <Barrett> I'm bored of discussing with sheldons.
14:45:08 --- quit: andrei-n_ (Quit: Leaving)
14:45:17 <Barrett> if you don't understand what I mean, well whatever.
14:46:33 <rain1> YOOOOOOOOOOOOO
14:46:35 <rain1> ANYBODY THERE
14:46:50 <lkurusa> lol
14:47:50 <newnix> no, no one has ever been anywhere ever
14:48:37 <geist> is there anybody out there
14:48:43 --- quit: drakonis (Read error: Connection reset by peer)
14:48:49 <Barrett> we just run on an exokernel implemented on top of a black hole.
14:49:06 <rain1> please talk about the thing im interested in
14:49:21 <Barrett> rain1, like weather?
14:50:33 --- quit: immibis (Ping timeout: 276 seconds)
14:50:58 <lkurusa> .theo
14:50:58 <glenda> Such strong words.
14:51:56 --- quit: Asu (Quit: Konversation terminated!)
14:55:40 * zesterer mounts rootfs inside a subdirectory of rootfs
14:55:45 * zesterer runs 'tree'
14:56:01 * zesterer ponders over why the kernel triple faults
14:56:19 --- join: lachlan_s (uid265665@gateway/web/irccloud.com/x-sfugpxxxhehysixt) joined #osdev
14:58:56 <geist> yay infinite recursion in the kernel
14:59:05 <geist> that's usually why there's some sort of max path len
14:59:15 <geist> or max recursion of mounted fses
14:59:17 --- mode: geist set -o geist
15:00:00 <lachlan_s> Is it possible to learn this power?
15:00:38 <_mjg_> eh
15:00:46 <geist> hoomans are too weak
15:00:57 <_mjg_> geist: what's your symlink recursion depth before fuchsia gives up?
15:01:09 <_mjg_> i think linux had it around 10 or so
15:01:19 <_mjg_> it may have been upped
15:01:43 <_mjg_> recursion on a symlink happened to also recurse on code
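
[editor's note: a minimal sketch, not from the log, of the depth limit geist and _mjg_ are talking about: path resolution that refuses to follow more than a fixed number of symlinks and reports an ELOOP-style error instead of recursing forever. vnode_t and the helper functions are hypothetical, not any real kernel's API, and the cap of 8 is arbitrary.]

    #define MAX_SYMLINK_DEPTH 8

    typedef struct vnode vnode_t;

    extern int is_symlink(vnode_t *v);                        /* hypothetical */
    extern int readlink_of(vnode_t *v, char *buf, int len);   /* hypothetical */
    extern int resolve_path(vnode_t *root, const char *path, vnode_t **out, int depth);

    /* follow a symlink chain, sharing one depth budget with resolve_path() */
    int follow_link(vnode_t *root, vnode_t *v, int depth, vnode_t **out)
    {
        char target[256];

        if (!is_symlink(v)) {
            *out = v;
            return 0;
        }
        if (depth >= MAX_SYMLINK_DEPTH)
            return -1;                       /* too many levels of symbolic links (ELOOP) */
        if (readlink_of(v, target, sizeof(target)) < 0)
            return -1;
        /* the link target may itself contain more links, so the budget is passed down */
        return resolve_path(root, target, out, depth + 1);
    }
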
15:06:20 <geist> dunno
15:06:30 <geist> i think we just dnot allow symlinks
15:08:18 <Brnocrist> hardlinks only?
15:08:48 <geist> no hardlinks as far as i know
15:09:03 <_mjg_> no symlinks
15:09:04 <_mjg_> radical
15:09:13 <geist> it's not posix mang
15:09:42 <geist> it's more like plan9 in that regard: the view that any given process has for the fs space (if at all) is per process
15:10:19 --- quit: Love4Boobies (Quit: Leaving)
15:10:22 <geist> essentially each process has its own mtab sort of, because of a set of fs handles that are given to it
15:10:41 <geist> since the fs handles are open channels to different object namespace servers
15:14:57 --- quit: kimundi (Ping timeout: 240 seconds)
15:15:25 --- quit: t3hn3rd (Ping timeout: 255 seconds)
15:25:36 --- quit: zeus1 (Ping timeout: 256 seconds)
15:25:44 <newsham> put it on my recursion tab
15:28:27 --- quit: Shamar (Quit: Lost terminal)
15:35:35 --- quit: light2yellow (Quit: light2yellow)
15:39:06 --- join: zeus1 (~zeus@197.239.5.50) joined #osdev
15:41:23 --- quit: Tazmain (Quit: Leaving)
15:46:44 --- quit: xenos1984 (Quit: Leaving.)
15:54:15 --- quit: hchiramm_ (Ping timeout: 276 seconds)
15:56:30 --- join: quul (~weechat@unaffiliated/icetooth) joined #osdev
16:02:33 --- quit: `Guest00000 (Ping timeout: 268 seconds)
16:21:01 --- quit: Barrett (Quit: Capeesh? Na vota tu dissi, I told you many times this is my businissi.)
16:27:31 --- quit: zesterer (Ping timeout: 260 seconds)
16:29:47 --- quit: dbittman (Remote host closed the connection)
16:36:04 <jjuran> lf94: MPW is single-process: https://en.wikipedia.org/wiki/Macintosh_Programmer%27s_Workshop
16:36:05 <bslsk05> ​en.wikipedia.org: Macintosh Programmer's Workshop - Wikipedia
16:40:26 --- join: KernelBloomer (~SASLExter@gateway/tor-sasl/kernelbloomer) joined #osdev
16:42:14 --- join: bcos (~bcos@58.170.96.71) joined #osdev
16:45:43 --- quit: bcos_ (Ping timeout: 264 seconds)
16:47:03 --- quit: quc (Remote host closed the connection)
16:48:35 --- join: rirc_F21E (~rirc_v0.1@92.207.128.213) joined #osdev
16:48:51 --- quit: dave24 (Ping timeout: 276 seconds)
17:03:39 --- quit: ACE_Recliner (Remote host closed the connection)
17:05:08 --- nick: EricB -> BronzeBeard
17:12:18 <spencerb> anyone happen to know which EFI_MEMORY_TYPE the Page Map Tables would be in? I'm somewhat new to osdev but it seems I definitely don't want to overwrite that in the process of loading my kernel into memory.
17:13:12 <geist> you generally want to almost immediately swap with your own page tables
17:13:29 <geist> but you can read it out of cr3 and check
17:16:51 <spencerb> thanks. I was thinking I should setup my paging in the kernel, but it may be simpler to do it earlier.
17:17:32 <clever> does the efi API have something like malloc?
17:18:11 <clever> my first guess is to just ask the EFI to allocate enough ram for your kernel and your temporary heap, then put your own paging tables into that heap, and swap over, then go nuts reusing ram
17:20:30 --- join: JusticeEX (~justiceex@pool-98-113-143-43.nycmny.fios.verizon.net) joined #osdev
17:23:21 <spencerb> ah, that makes so much sense. It wouldn't overwrite anything important then. yeah, it does indeed have some allocation methods.
17:25:28 <clever> and i'm guessing you would wait until you call the special exitEfiServices thing, wait till you have that heap allocated at least
17:26:42 <spencerb> yeah, otherwise it'd shout at me.
17:27:17 <clever> i'm guessing that the malloc routine likely wont work after that
17:30:02 <spencerb> yeah, doing any efi memory function stuff after exitbootservices call gives a page fault.
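
[editor's note: a minimal sketch, not from the log, of the sequence clever and spencerb settle on: allocate memory through boot services while they still exist, build your own page tables in it, then fetch the memory map and call ExitBootServices. The boot-services calls are standard UEFI (EDK2-style names via <Uefi.h> assumed); build_page_tables() is a hypothetical helper, and real code has to retry GetMemoryMap/ExitBootServices if the map key has gone stale.]

    #include <Uefi.h>

    extern VOID build_page_tables(VOID *scratch, UINTN pages);   /* hypothetical */

    EFI_STATUS take_over_paging(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *ST)
    {
        EFI_BOOT_SERVICES   *BS = ST->BootServices;
        EFI_PHYSICAL_ADDRESS scratch = 0;
        UINTN                pages = 64;      /* room for page tables + an early heap */
        EFI_STATUS           status;

        /* 1. allocate *before* leaving boot services, so the firmware marks the
         *    range EfiLoaderData and won't hand it out or describe it as free */
        status = BS->AllocatePages(AllocateAnyPages, EfiLoaderData, pages, &scratch);
        if (EFI_ERROR(status))
            return status;

        build_page_tables((VOID *)(UINTN)scratch, pages);

        /* 2. fetch the final memory map; ExitBootServices() needs its MapKey */
        UINTN map_size = 0, map_key, desc_size;
        UINT32 desc_ver;
        EFI_MEMORY_DESCRIPTOR *map = NULL;
        BS->GetMemoryMap(&map_size, map, &map_key, &desc_size, &desc_ver);
        map_size += 2 * desc_size;            /* slack: the pool allocation below grows the map */
        BS->AllocatePool(EfiLoaderData, map_size, (VOID **)&map);
        status = BS->GetMemoryMap(&map_size, map, &map_key, &desc_size, &desc_ver);
        if (EFI_ERROR(status))
            return status;

        /* 3. after this call, boot services (including the allocator) are gone --
         *    calling them again is the page fault mentioned above */
        return BS->ExitBootServices(ImageHandle, map_key);
    }
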
17:32:38 <klange> fuck, github changed their front page again
17:33:08 <klange> why are they showing me comments I made on my own tickets two weeks ago as if that is relevant activity
17:39:21 --- join: vaibhav (~vnagare@125.16.97.123) joined #osdev
17:43:40 <lachlan_s> klange: yeah, I just saw it
17:43:49 <lachlan_s> I kinda liked the old one
17:45:28 <klange> I just don't like websites that routinely make drastic design changes.
17:46:39 <klange> Subtle, slow changes are easier to adjust to. Making considerable design changes every few months means I have to re-learn your site over and over.
17:46:56 <latentprion> Old heads
17:47:06 <klange> It's one thing to make a big redesign after years (though, don't fuck that up like reddit, make sure your redesign is actually better)...
17:47:23 <latentprion> How did you feel when TV became colour
17:47:34 <klange> I was not alive.
17:47:37 --- quit: pie_ (Remote host closed the connection)
17:47:45 --- join: pie_ (~pie_@unaffiliated/pie-/x-0787662) joined #osdev
17:47:51 <klange> Color television was deployed in the 1960s.
17:48:06 <latentprion> How did you feel when text smiley faces became emojis
17:48:22 <klange> I think you are confusing something here.
17:48:41 <klange> I am talking about a website that I use regularly changing its layout many times.
17:48:57 <klange> In ways which serve no obvious purpose.
17:49:10 <latentprion> I'm poking fun at you; I'm not being serious
17:49:17 <klange> You're not poking fun at me successfully.
17:49:29 <klange> Your examples are entirely different, incomparable situations
17:49:44 <latentprion> That's how you know I'm poking fun at you lol
17:51:07 <klange> Color TV was a strict improvement in functionality introduced through a decade of concerted effort and much publicity. Emoji wasn't a sudden change, it was a slow development over time as the characters were added to Unicode and supported by more devices.
17:51:25 <klange> And those are changes in technologies, not changes in layout.
17:51:40 <klange> A TV example is if a network you watched kept changing their schedules.
17:51:51 <klange> Forcing you to reprogram your VCR every three months.
17:52:19 --- quit: Naergon (Ping timeout: 240 seconds)
17:59:01 <geist> yah i'm not entirely convinced that emojis are an improvement
17:59:12 <geist> though i guess objectively they are
18:04:30 <klange> While emoji themselves may not be a considerable improvement, it's important to remember what they've done for other parts of the technology stack.
18:05:20 <klange> They've driven Unicode support forward in all sorts of places, including ones that were stuck in UCS-2. They've also led to the development of new font technologies for colored glyphs.
18:05:37 <geist> yah
18:08:19 --- join: bemeurer (~bemeurer@185.236.200.248) joined #osdev
18:10:46 --- quit: bork (Ping timeout: 260 seconds)
18:11:35 <climjark_> hello all
18:12:44 --- join: bork (ayy@cpe-76-173-133-37.hawaii.res.rr.com) joined #osdev
18:16:13 <klange> hi
18:17:45 --- quit: X-Scale (Ping timeout: 256 seconds)
18:19:17 <lkurusa> ‘ello
18:20:21 --- join: drakonis (~drakonis@unaffiliated/drakonis) joined #osdev
18:21:54 --- join: drakonis_ (~drakonis@unaffiliated/drakonis) joined #osdev
18:22:26 <climjark_> rain1 you still here?
18:23:10 <rain1> yep
18:23:40 <climjark_> Casey is stating that 17 million lines of code need to be run from linux to get a web page
18:24:14 --- quit: drakonis_ (Client Quit)
18:24:31 <climjark_> thats not true, due to the modularity of the kernel, not all 17 million lines of code get executed. Thats accounting for all architectures, when the kernel was only built for 1
18:25:02 <rain1> ok
18:25:06 <climjark_> same with FreeBSD, im sure that 13 million lines of code he quotes, that would probably be all the architecture implementations
18:25:19 <_mjg_> where are this figures coming from?
18:25:25 <_mjg_> these
18:25:26 <rain1> wc
18:25:30 <climjark_> yea
18:25:32 <climjark_> probably
18:25:47 <drakonis> where's this coming from?
18:25:48 <_mjg_> i mean it is rather clear how the guy got the results
18:25:50 <lkurusa> who is this Casey?
18:25:52 <_mjg_> i'm asking where is this
18:25:53 <climjark_> ehhhh , i mean probably more accurate ways to do that
18:25:59 <climjark_> https://www.youtube.com/watch?v=kZRE7HIO3vk
18:25:59 <_mjg_> is htis a talk or something?
18:26:00 <bslsk05> ​'The Thirty Million Line Problem' by Handmade Hero (01:48:55)
18:26:08 <climjark_> rain1 posted this a while ago
18:26:11 <climjark_> ive been listening to it lol
18:26:27 <drakonis> does he explicitly name any systems?
18:26:51 <climjark_> He goes over an example of "How many lines of code it would take for you to get a webpage from a browser"
18:27:10 <klange> uh, those 13 million aren't from the kernel
18:27:12 <_mjg_> lul
18:27:13 <klange> they're from the browser ;)
18:27:17 <_mjg_> ok
18:27:20 <rain1> haha
18:27:22 <climjark_> including each api it deals with, the kernel, network, apache, mysql, chrome, freebsd, linux
18:27:25 <_mjg_> so he basically wc-ed several projects
18:27:29 <drakonis> hmm, i don't think i can sit up and watch almost 2 hours
18:27:32 <_mjg_> and just summed it all up
18:27:32 <drakonis> lol
18:27:34 <rain1> yeah _mjg_ obviously its a just a rough estimate
18:27:40 <rain1> it doesn't need to be exact
18:27:45 <_mjg_> it is not even a rough estimate
18:27:46 <rain1> the point is just
18:27:47 <climjark_> 15 mins in
18:27:51 <rain1> there's a lot
18:28:02 <drakonis> lol
18:28:12 <lkurusa> this seems like a video that’s likely not worth my time
18:28:15 <drakonis> it isn't
18:28:17 <_mjg_> this reminds me of unikernel talks
18:28:20 <climjark_> yea but its not like you're loading all the microcode into one single processor lol
18:28:39 <drakonis> freebsd has 13 million lines of code
18:28:41 <drakonis> say whaaaat
18:28:41 <_mjg_> where they claim linux is so fucking big LOC-wise
18:28:42 <lkurusa> deferpanic.com/
18:28:48 <_mjg_> while ignoring the fact they include all the drivers
18:28:53 <_mjg_> which they don't provide in unikernels
18:29:06 <rain1> linux is one of the few things that's actually ok
18:29:14 <drakonis> isn't openwrt a linux distro?
18:29:19 <lkurusa> check out this gem:
18:29:22 <klange> yes
18:29:24 <lkurusa> http://deferpanic.com/
18:29:25 <bslsk05> ​deferpanic.com: Defer Panic
18:29:26 <drakonis> lol
18:29:31 <drakonis> why include a distro there
18:29:36 <lkurusa> unikernels are the future !
18:29:39 <lkurusa> /s
18:29:44 <drakonis> unless it has code not on the main tree
18:29:51 <_mjg_> where is he taking 13 mln for freebsd though?
18:29:57 <_mjg_> i just checked the kernel itself, it is 10.3 mln
18:30:05 <rain1> you're getting really hung up on the exact numbers _mjg_
18:30:06 <drakonis> lol at kernel size though
18:30:16 <rain1> the point is just that there's a lot of code
18:30:22 <drakonis> i hear some freebsd community people taking potshots at linux for being huge
18:30:25 <_mjg_> rain1: i'm just curious about this bit
18:30:37 <rain1> well i assume he counts the libc and userspace, I don't really know
18:30:37 <_mjg_> the entire repo is 44.5 mln
18:30:47 --- join: sixand (~Thunderbi@219.145.113.142) joined #osdev
18:30:51 <drakonis> it pulls in the llvm source doesn't it?
18:30:54 <_mjg_> yea
18:30:56 <drakonis> rip
18:30:57 <chrisf> all that supported hardware
18:31:04 <chrisf> _huge_
18:31:06 <_mjg_> i don't know any bsd people taking stabs at linux for this though
18:31:24 <drakonis> its the people surrounding the developer community that likes to take stabs at linux for no reason
18:31:34 <drakonis> also if you see feld, tell him that you can chroot into namespaces
18:31:41 <klange> i can get you a whole OS in 50k lines if you're so worried about code size
18:31:48 <_mjg_> :>
18:31:48 <drakonis> its some comment from a few months ago that he got something wrong about linux
18:31:55 <drakonis> i ain't worried about code size
18:31:58 <drakonis> that's the openbsd people
18:32:18 <_mjg_> if talking about code sizes, there was a very cooll talk by rusty russel
18:32:28 <_mjg_> how /bin/echo & friends changed in size over the years
18:32:30 <climjark_> im just trying to say that those numbers are wildly bloated lol
18:32:39 <drakonis> instead of code sizes, can we talk about code aesthetics
18:32:52 <drakonis> i find it cool when people draw cool looking art with code
18:32:55 <_mjg_> i would be interested to see /instruction counts/ though
18:33:05 <_mjg_> just to get a sense of scale here
18:33:06 <klange> all programs should be obfuscated code art
18:33:08 <chrisf> drakonis: you've read gnu coreutils, right?
18:33:22 <drakonis> obfuscated code is art
18:33:29 <chrisf> drakonis: aethetics there are.. twisted.
18:33:36 <klange> if cat'ing your source file doesn't produce a pretty picture in my terminal, what are you even doing?
18:33:36 --- quit: sixand (Remote host closed the connection)
18:33:45 <drakonis> ^^^^^^^
18:33:58 --- join: sixand (~Thunderbi@219.145.113.142) joined #osdev
18:34:41 <_mjg_> if you write an OS and did not put a screenshot on osdev, did you even do it?
18:34:47 --- join: NaNkeen (~nankeen@122.0.31.67) joined #osdev
18:35:00 <drakonis> schroedinger's OS
18:35:25 <_mjg_> A historical argument for creating a stable instruction set architecture (ISA) for entire system-on-a-chip (SoC) packages.
18:35:26 <climjark_> lmao klange, so like, cat cat.c should look like a cat, amirite?
18:35:31 <klange> if you post screenshots of an amazing OS and don't have a release ISO or a codebase does your OS really exist?
18:35:37 <klange> climjark_: definitely
18:35:52 <drakonis> _mjg_, bby its called risc-v
18:35:59 <drakonis> also bby its called arm
18:36:01 <drakonis> or mips
18:36:01 <_mjg_> the video is way too long for me to try watching
18:36:04 --- quit: elevated (Quit: bye)
18:36:04 <_mjg_> given the loc's posted
18:36:06 <lkurusa> this reminds me of that black hat presentation where the guy created art inside the disassembler as a form of obfuscation
18:36:09 <_mjg_> it looks iffy
18:36:13 <drakonis> this guy made a game
18:36:14 <klange> do any of the risc-v platforms have framebuffers yet
18:36:27 <drakonis> probably not yet
18:36:29 <_mjg_> btw, fbsd has a riscv port
18:36:37 <drakonis> i've heard
18:36:44 <_mjg_> does not build
18:36:56 <drakonis> i got in an argument over whether having a risc-v port first is more valuable than a working port for a major platform
18:37:00 <klange> i was thinking of doing a riscv port, but my OS is all about dat gui, and without graphics hardware to shit out to, why bother
18:37:07 <drakonis> i wasn't the person saying that doing it first therefore makes you better
18:37:27 <drakonis> a working risc-v port
18:37:28 <drakonis> see linux
18:37:35 <Mutabah> klange: Stream full-frame uncompressed video? :)
18:37:45 <drakonis> it actually builds and the folks with the risc-v board are advertising it as the first linux board
18:37:51 <Mutabah> Actually, maybe stream screen rect updates?
18:37:58 <Mutabah> That sounds tempting...
18:38:08 * Mutabah makes mental TODO to make a VNC clone
18:38:14 <drakonis> a VNC clone? hmmm
18:38:33 <lkurusa> i have a simple PoC written for RISC-V that boots
18:38:42 <klange> I could implement a VNC server and just run against a dumb framebuffer
18:38:44 <lkurusa> it’s for that $69 board from the sifive website
18:38:53 <drakonis> the SiFive board doesn't have a gpu yet
18:38:57 <lkurusa> risc-v is pretty neat
18:39:14 <klange> I was thinking about implementing a VNC server for my UI anyway...
18:39:34 <_mjg_> https://www.youtube.com/watch?v=Nbv9L-WIu0s
18:39:35 <bslsk05> ​'Bloat: How and Why UNIX Grew Up (and Out) - Rusty Russell,Matt Evans' by Linux.conf.au 2012 -- Ballarat, Australia (00:42:38)
18:39:41 <lachlan_s> I'm definitely gonna port nebulet to riscv once cretonne supports it
18:40:09 <drakonis> i feel like its really easy to take stabs at fbsd
18:40:44 <_mjg_> it is trivial
18:41:00 <_mjg_> most stabs i have seen are somewhat misguided though
18:41:30 <_mjg_> in particular "security"-"oriented" people seem to misrepresent the state
18:41:35 <_mjg_> A LOT
18:41:38 <drakonis> ah yes
18:42:23 <drakonis> i do have some technical state of the union stabs
18:42:33 <Mutabah> lachlan_s: How is cretonne going?
18:43:07 <lachlan_s> As far as I can tell, it's going well! I'm pretty hands off, it's written by dan gohman
18:43:11 <_mjg_> short version is, the system *lacks* several basic mitigations (e.g. ASLR), but there are patches for it
18:43:35 <drakonis> the guy taking stabs gets invited to dev summits
18:43:41 <drakonis> its weird
18:43:57 <_mjg_> the system *does* provide other security features like the mac framework (selinux-y for the sake of argument), capsicum (better than syscall filtering) and so on
18:44:28 <_mjg_> however, the most egregious claim i have seen is that bsd kernels in general are secure, while linux is dogshit
18:44:35 <drakonis> lol @ this claim
18:44:38 <_mjg_> ye
18:44:52 <drakonis> linux has orders of magnitude more people poking holes at it and finding out bugs
18:44:53 <_mjg_> in reality linux was at least fairly well fuzzed single threaded and recently even multi-threaded
18:45:05 <_mjg_> whereas you can crash any bsd no problem with a non-retarded fuzzer
18:45:11 <drakonis> lol
18:45:17 <_mjg_> and that's bugs which find themselves, so to speak
18:45:21 <_mjg_> let alone actual audit
18:45:49 <drakonis> i heard from another user going through the code that you can RCE it using distributed compilation
18:45:53 <renopt> might have been true looong ago when BSDs were actually popular
18:45:56 <drakonis> this seems like a really bad thing
18:46:02 <_mjg_> ?
18:46:11 <drakonis> ccache i think?
18:46:25 <_mjg_> i don't see how this is a vulnerability
18:46:37 <drakonis> remote code execution when compiling code lol
18:46:57 <_mjg_> which way
18:47:12 <_mjg_> on the target nodes which do compilation
18:47:14 <rain1> linux seems very secure to me
18:47:16 <_mjg_> or the 'master' node
18:47:23 <_mjg_> either way, does not seem like a vuln
18:47:23 <drakonis> now, i don't know the answer to that
18:47:31 <_mjg_> this is commands flying left and right
18:47:32 <drakonis> i have been told it can happen
18:47:38 <_mjg_> and you trust generated .o files
18:47:39 --- join: Pseudonym73 (~Pseudonym@e-nat-unimelb-128-250-0-40.uniaccess.unimelb.edu.au) joined #osdev
18:47:40 <lachlan_s> It seems so strange that the kernels that we use everyday have so many bugs and vulnerabilities
18:47:49 <drakonis> its because we're human
18:47:52 <_mjg_> it basically makes no fucking sense to claim a vuln here
18:47:53 <Mutabah> They're complex systems
18:47:54 <drakonis> humans are terrible at code
18:48:03 <klange> it's almost like writing a massive complex system that actually supports the hardware and software use is hard
18:48:07 <_mjg_> similar to claiming that make has arbitrary code exec
18:48:09 <_mjg_> check this out:
18:48:10 <_mjg_> lol:
18:48:12 <drakonis> i didn't claim it is a vulnerability but i did say it is weird lol
18:48:12 <_mjg_> echo what now fucker
18:48:14 <_mjg_> make lol
18:48:22 <drakonis> wait what now
18:48:24 <drakonis> oh
18:48:30 <drakonis> this is basically the equivalent of using goto
18:48:39 <drakonis> print fart goto 00
18:48:56 <drakonis> c'mon you can try harder
18:48:57 <_mjg_> so really, the claim is of a rather puzzling nature
18:49:10 <_mjg_> no matter which way the supposed execution is to take place
18:49:13 <drakonis> okay sure, its a second hand retell
18:49:27 <drakonis> i can't tell you the full truth here because i don't remember it for certain
18:49:31 <_mjg_> afair ccache logs in over ssh
18:49:34 <_mjg_> and just spawns the compiler
18:49:39 <klange> I would say x86 CPUs have arbitrary code execution, but with all the bugs I'm not so sure.
18:49:41 <_mjg_> so there is already remote code exec
18:49:44 <klange> ;)
18:49:50 <drakonis> by the way, i have a barb against becoming reliant on zfs lol
18:49:54 <clever> ccache doesnt distribute at all
18:50:02 <clever> ccache is purely caching the result of cc in a local directory
18:50:10 <_mjg_> oh shit
18:50:12 <drakonis> nobody's touching the things zfs is bolted on because its the main selling point today
18:50:13 <_mjg_> i confused it with distcc
18:50:19 <clever> distcc is the one that distributes :P, and i remember it using its own dedicated port
18:50:20 <_mjg_> this rce made me do it
18:50:27 <drakonis> its distcc not ccache bby
18:50:35 <drakonis> ccache is a different thing
18:50:36 <_mjg_> RCE with ccache would doubly make no sense
18:51:12 <drakonis> fbsd needs them namespacing
18:51:15 <_mjg_> i did parse it as distcc though :>
18:51:25 <_mjg_> drakonis: it has it both better and worse
18:51:28 <drakonis> i did type it incorrectly, my bad
18:51:34 <_mjg_> user namespaces are basically a mistake imo
18:51:41 <drakonis> a mistake
18:51:41 <drakonis> hmm
18:51:54 <_mjg_> jails /in concept/ are allright for me
18:51:57 <drakonis> it did help propel linux into becoming a major thing
18:51:58 <_mjg_> but they also have stupid shit inside
18:52:03 <drakonis> jails as done by freebsd seems really stupid
18:52:08 <_mjg_> like nested jailing, which runs into all the user namespace issues
18:52:09 <drakonis> why do i have to use the entire suite of restrictions
18:52:14 <_mjg_> ?
18:52:19 <_mjg_> in what way
18:52:47 <drakonis> why can't i use just the namespace part?
18:53:02 <drakonis> i'm required to restrict what's inside by default
18:53:05 <_mjg_> there is no namespace part
18:53:09 <drakonis> there is?
18:53:10 <klange> we should go back to the days when computers came with a BASIC intepreter and no OS
18:53:12 <drakonis> isn't
18:53:13 <_mjg_> i don't see much benefit of adding it
18:53:21 <drakonis> granularity
18:53:22 <drakonis> really
18:53:28 <_mjg_> granularity of what
18:53:43 <drakonis> linux has some really dumb shit you can do with it
18:53:47 <drakonis> like mounting files
18:53:56 <_mjg_> mounting files is not related to namespaces
18:53:59 <klange> why is that dumb
18:54:01 <drakonis> then i'm confused
18:54:09 <drakonis> its for effect
18:54:22 <drakonis> its not dumb at all, its just me being bad with words
18:54:24 <_mjg_> there is a mount namespace which allows you to mount shit from your perspective
18:54:28 <_mjg_> and not affect anyone else
18:54:47 <drakonis> okay, i want to mount a namespace inside another one, a folder or a file specifically
18:54:49 <_mjg_> maybe this has reasonable use, i don't know
18:54:50 <drakonis> mirror that across every jail
18:55:00 <_mjg_> you can --bind or -t nullfs
18:55:04 <drakonis> so i don't have to maintain that individually every time i want to do jails
18:55:06 <_mjg_> this is not a namespace thing
18:55:10 <drakonis> yes linux can bind that
18:55:16 <_mjg_> and you can -t nullfs on freebsd
18:55:18 <drakonis> that's a vfs thing isn't it
18:55:18 <_mjg_> equivalent
18:55:35 <drakonis> then i'm not sure how freebsd didn't actually catch on to the container mania?
18:55:44 <drakonis> if it has the things needed
18:56:09 <_mjg_> first off, it had most of the functionality around 2002 or so. linux with openvz around 2005 or so
18:56:13 <_mjg_> production quality i mean
18:56:16 <_mjg_> and that did not catch on either
18:56:30 <_mjg_> what is possibly of value which linux does and fbsd does not is unionfs
18:56:44 <_mjg_> they call it overlayfs i think
18:57:07 <Drakonis[m]> overlayfs is the latest revision
18:57:18 <_mjg_> that said, all the important container stuff was there in both projects for years
18:57:26 --- quit: sixand (Ping timeout: 260 seconds)
18:57:47 <Drakonis[m]> tooling then?
18:57:51 <_mjg_> probably
18:58:13 <_mjg_> back when i was a sysadmin there were no public orchestration tools and whatnot
18:58:20 <_mjg_> everyone had their own wtf scripts
18:58:36 <_mjg_> and most people have no idea what they are doing, so there is that too
18:58:36 --- quit: zeus1 (Ping timeout: 260 seconds)
18:58:51 --- join: sixand (~Thunderbi@219.145.113.142) joined #osdev
18:59:12 <_mjg_> that said, the concept of putting everything in a dir and having it logically separated is old
18:59:28 <_mjg_> and was production ready on linux as well last decade
18:59:36 <_mjg_> but it was not being pushed by big players like red hat
19:01:37 <_mjg_> s/players/vendors/
19:01:44 <Drakonis[m]> rh was busy with kvm then
19:02:00 <_mjg_> note last decade was still an era where people were running mod_php
19:02:06 --- quit: NaNkeen (Ping timeout: 260 seconds)
19:02:14 <_mjg_> and the standard webdev advice was to chmod -R 777 your shit
19:02:18 <Drakonis[m]> playing the virt game
19:02:44 <_mjg_> where all the running php scripts of all customers on the given box were running with the same uid
19:02:53 <_mjg_> it's basically dark ages
19:03:36 <Drakonis[m]> you'll notice that there's people that yearn for a return to the dark ages
19:04:11 <_mjg_> there is a lot of flaming about systemd if that's what you mean
19:04:16 <_mjg_> which is rather orthogonal
19:04:40 <Drakonis[m]> not systemd lol
19:04:41 <_mjg_> it may be there are flames about DOCKER, but i perhaps sit in the wrong spot to see them
19:04:51 <_mjg_> other than that i don't know what you mean
19:04:51 <Drakonis[m]> but sure
19:04:53 <Drakonis[m]> docker also gets flamed
19:05:14 --- quit: sixand (Remote host closed the connection)
19:05:18 <Drakonis[m]> i mean people chrooting and manually executing their shell scripts one by one
19:05:35 --- join: sixand (~Thunderbi@219.145.113.142) joined #osdev
19:05:36 <Drakonis[m]> to deploy software
19:06:17 <Drakonis[m]> to an era before automation
19:12:14 --- join: sixand1 (~Thunderbi@219.145.113.142) joined #osdev
19:12:30 <klange> Computers were a mistake.
19:12:49 --- quit: sixand (Ping timeout: 240 seconds)
19:12:50 --- nick: sixand1 -> sixand
19:13:15 <geist> word.
19:14:35 <drakonis> :agreed:
19:14:54 <drakonis> however they did give us nethack
19:16:13 <drakonis> so i might have to take it back
19:21:03 --- quit: epony (Quit: QUIT)
19:21:35 <graphitemaster> and lots of porn
19:21:46 <graphitemaster> basically, technology only exists because of porn
19:22:01 <Mutabah> Why you think the 'net was born.
19:22:14 <klange> theeeeee internet is for porn
19:22:15 <drakonis> so people could say stupid shit to each other through the wire?
19:22:30 <klange> so grab your **** and double click for porn, porn, porn!
19:22:59 * Mutabah gives klange a cookie
19:23:38 <graphitemaster> porn has the best geolocation ip code, every porn site knows my location down to the last mile, google can't even get the right city. porn has the best video players, youtube is actually broken ....
19:23:55 <graphitemaster> this is the world we live in
19:24:07 <graphitemaster> where not even google can beat pornhub at web technologies
19:24:15 <graphitemaster> and google is the largest web company on earth
19:24:56 <_mjg_> the counters of strangers in your area are fake
19:25:07 <drakonis> stranger danger
19:25:24 <graphitemaster> google has advanced ai that can drive cars but the very technology they use to ensure I'm not a robot is if I can determine which pictures are street signs!
19:25:36 <klange> that's...
19:25:39 <klange> THAT'S WHAT THAT IS FOR
19:25:57 <drakonis> facebook has so much money, but it can't make an application that won't spam notifications about how "LOOK AT ALL THESE PEOPLE SWIPING RIGHT" when i don't have an account
19:26:12 <graphitemaster> I select the ones not streetsigns because I want to cause accidents
19:26:16 <klange> "The government can spend billions of dollars on the military, but they have to take *my* money in taxes!"
19:26:19 <graphitemaster> I wonder if that is illegal
19:26:31 <_mjg_> google recently gave me an ad: would you like to get a phd in philosophy
19:26:36 <_mjg_> i kind of declined though
19:26:44 <klange> Ooh, sure.
19:26:49 <chrisf> graphitemaster: not illegal to be an asshat
19:27:00 <klange> I'd love to get my doctorate in something stupid.
19:27:09 <chrisf> _mjg_: just kind of
19:27:14 <graphitemaster> chrisf, that's good news
19:28:13 <graphitemaster> the counter of strangers in my area may be fake, but the area they put in plain text is correct
19:28:36 <graphitemaster> meanwhile google thinks i'm in a city nearly 300km away
19:30:13 <_mjg_> klange: it does sound fun for a little bit, but then it turns out it is an actual time commitment
19:30:17 <_mjg_> non-trivial one too
19:30:22 <_mjg_> so i'm going to pass
19:30:42 <_mjg_> besides, for the sake of argument let's say i got one
19:31:01 <graphitemaster> aren't all phds useless in this job market?
19:31:03 <_mjg_> any person who learns about it is going to laugh, and rightfully so
19:31:16 --- quit: sixand (Ping timeout: 260 seconds)
19:31:22 <graphitemaster> so if you already have a phds, then you already have a useless one
19:31:30 <graphitemaster> s/phds/phd/
19:31:32 <graphitemaster> </asshat>
19:31:33 <_mjg_> i don't think that's true
19:32:45 <drakonis> lol
19:32:49 <drakonis> a phd in asshat you mean
19:33:11 <doug16k> graphitemaster, you still want xcpumemperf results? I got a 2nd gen ryzen, 2700X
19:33:13 <graphitemaster> xkcd has a phd in undecided
19:33:29 <graphitemaster> er, undeclared
19:33:33 <graphitemaster> https://xkcd.com/1052/
19:33:34 <bslsk05> ​xkcd - Every Major's Terrible
19:34:20 <graphitemaster> doug16k, sure, if you add it to the results and result.png as a PR
19:34:29 <graphitemaster> don't have the code here to redo all that
19:35:06 <_mjg_> i have to say i met some suspicious phds
19:35:19 <_mjg_> but most i ran into were smart people /clued/ in their area
19:35:43 <_mjg_> and i'm told big corps like G look at the degree with a friendly eye
19:36:18 <_mjg_> so i doubt it is useless
19:38:01 <drakonis> the degree is only useful for making you use your brain
19:38:51 <_mjg_> see the remark about hiring. i don't have any hard data and i suspect you don't either
19:39:28 --- join: NaNkeen (~nankeen@203.188.234.181) joined #osdev
19:39:35 <drakonis> now, i don't either
19:41:23 <graphitemaster> well compared to the old days, a degree, especially a phd, was a guarantee for a job
19:42:49 --- join: zeus1 (~zeus@197.239.5.50) joined #osdev
19:44:49 --- quit: NaNkeen (Ping timeout: 240 seconds)
19:48:07 --- quit: drakonis (Remote host closed the connection)
19:55:02 --- join: Arcaelyx (~Arcaelyx@2604:2000:f14a:2500:e81c:4527:d51d:16fc) joined #osdev
19:59:49 --- join: `Guest00000 (~user@37.113.180.34) joined #osdev
20:01:19 --- quit: JusticeEX (Ping timeout: 240 seconds)
20:06:18 <klange> anybody want a general IT/network admin job in Tokyo?
20:06:51 --- quit: nj0rd (Ping timeout: 260 seconds)
20:06:58 <lkurusa> O_O
20:07:24 <klange> just askin'
20:08:17 <Drakonis[m]> japan? land of animes and suicidal salarymen?
20:09:31 <klange> i assure you our company does not drive people to suicide
20:10:46 <graphitemaster> yet
20:18:25 --- join: Goplat (~Goplat@reactos/developer/Goplat) joined #osdev
20:21:31 --- join: nj0rd (~nj0rd@i577BC0C9.versanet.de) joined #osdev
20:23:37 --- join: unixpickle (~alex@c-24-5-86-101.hsd1.ca.comcast.net) joined #osdev
20:24:34 --- quit: bemeurer (Ping timeout: 255 seconds)
20:24:50 <Drakonis[m]> a fate worse than death...
20:25:00 <Drakonis[m]> i found a website called lobsters
20:25:05 <Drakonis[m]> seems kinda cool
20:25:06 <lkurusa> lobste.rs ?
20:25:16 <Drakonis[m]> yeah?
20:25:30 <lkurusa> It
20:25:39 --- quit: zeus1 (Ping timeout: 265 seconds)
20:25:40 <lkurusa> it’s like Hacker News, but better imho
20:26:00 <lkurusa> can’t really explain why, but the community is different I guess?
20:26:41 --- join: bemeurer (~bemeurer@2600:8802:5300:bb90:863a:4bff:fe06:c8b2) joined #osdev
20:26:46 <Drakonis[m]> seems like it?
20:26:49 <Drakonis[m]> has mods from what i can see
20:26:56 --- quit: bemeurer (Client Quit)
20:27:21 --- join: bemeurer (~bemeurer@2600:8802:5300:bb90:863a:4bff:fe06:c8b2) joined #osdev
20:28:27 --- join: drakonis (~drakonis@unaffiliated/drakonis) joined #osdev
20:29:45 <geist> oh that reminds me of this other funny site that i saw and now forgot but will find...
20:30:16 <drakonis> schroedinger's website
20:30:45 <geist> http://n-gate.com/
20:30:46 <bslsk05> ​n-gate.com: n-gate.com. we can't both be right.
20:31:00 <drakonis> ah i've heard about it
20:31:27 <drakonis> HN is so hilariously terrible
20:31:33 <lkurusa> reminds me of http://lkml.wtf/
20:31:36 <drakonis> seemed like it was about time there was something better
20:31:45 <drakonis> lkml.wtf is good, sucks that jess isn't updating it
20:32:03 <geist> oh yeah, seems to be not updated in the last year
20:32:10 --- quit: kuldeep (Remote host closed the connection)
20:32:28 <drakonis> "Google, the provider of reCAPTCHA, a service designed to prevent software from pretending to be people on the internet, brags about their software that pretends to be people on the phone. Hackernews reassures everyone that this is fine, since only idiots talk on the phone. The rest of the comments are all predictions of the amazing new world ushered in by robots lying on telephones. "
20:32:31 <drakonis> amazing...
20:33:01 --- join: kuldeep (~kuldeep@unaffiliated/kuldeepdhaka) joined #osdev
20:33:10 <lkurusa> ya I asked her at this year’s LCA
20:33:17 <lkurusa> she said she would but i guess she doesn’t have time :(
20:33:59 <lkurusa> sarcasm at its finest
20:34:14 <geist> yeah i guess sarcasm doesn't have an infinite well
20:34:39 <drakonis> it is really useful for getting a concise week-in-review of the mailing list
20:34:57 <drakonis> there's some mesa guy trying to get remote kms
20:35:07 <drakonis> send the framebuffer through the wire
20:35:33 * lkurusa goes to sleep, ZzZzzZzzz
20:35:39 <drakonis> noice, good night
20:38:05 <bluezinc> /buffer 31
20:38:58 --- quit: MDude (Ping timeout: 256 seconds)
20:39:20 <drakonis> hmm, lobsters has irc
20:39:59 --- join: MDude (~MDude@c-73-187-225-46.hsd1.pa.comcast.net) joined #osdev
20:49:59 --- quit: _sfiguser (Quit: Leaving)
20:51:24 --- quit: drakonis (Remote host closed the connection)
20:55:32 --- join: epony (~nym@79-100-134-61.ip.btc-net.bg) joined #osdev
20:56:28 --- quit: lachlan_s (Quit: Connection closed for inactivity)
21:08:12 --- join: zeus1 (~zeus@197.239.5.50) joined #osdev
21:09:54 --- quit: plonk_ (Read error: Connection reset by peer)
21:10:01 --- quit: JonRob (Ping timeout: 255 seconds)
21:11:36 --- join: JonRob (jon@gateway/vpn/privateinternetaccess/jonrob) joined #osdev
21:11:52 --- join: plonk (~plonk@rosa.physik-pool.tu-berlin.de) joined #osdev
21:11:52 --- quit: plonk (Changing host)
21:11:52 --- join: plonk (~plonk@unaffiliated/plonk) joined #osdev
21:18:20 --- quit: unixpickle (Quit: My MacBook has gone to sleep. ZZZzzz…)
21:20:05 <geist> hmm, the netbsd cpu balancing code is pretty odd
21:21:01 <geist> https://github.com/NetBSD/src/blob/trunk/sys/kern/kern_runq.c#L568 seems to be the root of it, and that if i read it right would happen extremely often
21:21:04 <bslsk05> ​github.com: src/kern_runq.c at trunk · NetBSD/src · GitHub
21:21:38 <geist> it eventually dives into sched_balance(); which seems to update a global variable worker_ci
21:21:47 --- join: oaken-source (~oaken-sou@p5DDB4F5D.dip0.t-ipconnect.de) joined #osdev
21:21:52 <geist> none of this is held with a lock, so i dunno precisely how multiple cpus don't race on the worker_ci var
21:22:40 <geist> so a) it's doing this fairly expensive O(n) rebalancing thing on every single idle event, that would be hyper aggressive
21:23:17 <geist> and b) dunno about this worker_ci thing, that seems like it would be overwritten like crazy by different cpus running the same algorithm concurrently
21:25:28 <geist> even sort of weirder, it calls the same sched_balance() thing in the 'sched_nextlwp()' routine if the queue is empty
21:25:39 <geist> which implies that it would have probably already run this logic when it decided to go idle in the first place
21:26:17 <geist> and also there's a callout thats calling this routine as well,
21:29:15 <geist> so basically it's hyper balancing things between cores and being very racy about it
21:32:22 <geist> there mayyyy be a reason for it though, if concurrent versions of the sched_balance() are updating the victim cpu simultaneously, it may in general tend to avoid them deciding to steal threads from each other
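Condensed into a sketch, the pattern being described looks roughly like this (illustrative C, not NetBSD's actual code): every idle event plus a periodic callout rescans all CPUs and stamps the busiest one into an unlocked global, so concurrent balancers simply overwrite each other and readers only ever see some recent, possibly stale winner.

    struct cpu_info {
        unsigned nrunnable;          /* length of this CPU's run queue */
        /* ... */
    };

    extern struct cpu_info cpus[];
    extern int ncpu;

    static struct cpu_info *worker_ci;   /* global "victim" pointer, no lock */

    /* Called on every idle event and from a periodic callout. */
    static void sched_balance_sketch(void)
    {
        struct cpu_info *best = NULL;

        for (int i = 0; i < ncpu; i++)    /* O(ncpu) scan, every time */
            if (best == NULL || cpus[i].nrunnable > best->nrunnable)
                best = &cpus[i];

        worker_ci = best;   /* plain store: last writer wins, readers may see stale data */
    }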
21:34:45 --- quit: wgrant (Ping timeout: 256 seconds)
21:48:54 --- join: xenos1984 (~xenos1984@2001:bb8:2002:200:6651:6ff:fe53:a120) joined #osdev
21:49:10 --- quit: CrystalMath (Quit: Support Free Software - https://www.fsf.org/)
21:49:40 --- quit: dengke (Remote host closed the connection)
21:54:58 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
21:56:34 --- quit: pounce (Quit: WeeChat 2.1)
21:57:31 --- quit: variable (Client Quit)
21:59:05 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
22:02:46 --- join: cantbetouched (~SASLExter@gateway/tor-sasl/kernelbloomer) joined #osdev
22:04:01 <_mjg_> geist: i'm very serious right now
22:04:21 <_mjg_> geist: netbsd had its last significant performance-related development around 2010-2011
22:04:27 --- quit: dustinm` (Quit: Leaving)
22:04:43 --- quit: variable (Quit: /dev/null is full)
22:05:07 <_mjg_> geist: most benchmarks from that era that you will be able to find were testing on an 8 core system
22:05:17 --- quit: KernelBloomer (Ping timeout: 255 seconds)
22:05:45 <_mjg_> geist: i genuinely don't think there is anything to look for in their code when it comes to ideas
22:05:59 <_mjg_> and i say that after going through several key subsystems
22:06:04 <_mjg_> well, parts
22:06:32 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
22:06:48 <Drakonis[m]> it's not like it has any significant deployments or contributions
22:06:48 <_mjg_> (locking primitives, vfs and miscellaneous code in sys/kern)
22:06:55 <_mjg_> yes
22:07:17 <_mjg_> geist: not so long ago someone even ran into a bug where a *two* cpu system failed to rebalance the load
22:07:34 <Drakonis[m]> the most interesting thing seems to be the rump kernel thing
22:07:47 <_mjg_> if looking for ideas to steal, freebsd may be worth taking a look, but it really is not anything fancy either
22:07:54 <Drakonis[m]> can't help but think of butts when the word comes up
22:08:03 <Drakonis[m]> freebsd is too conservative
22:08:05 <geist> _mjg_: yeah i think i get it now, and it's an interesting example of a solution that may make sense in 2-4 cpu designs
22:08:05 <_mjg_> i have mixed feelings about it
22:08:10 <geist> but no way would that scale
22:08:13 <Drakonis[m]> dflybsd seems better
22:08:35 <_mjg_> dillon made several invasive changes and did not have to reach consensus with anyone to do so
22:08:36 <epony> how do fbsd and flbsd compare?
22:08:38 <_mjg_> but
22:08:44 <epony> dfl
22:08:52 <_mjg_> epony: dfly used to have a significant edge
22:09:01 <geist> i actually have similar pressure at work. there are situations where extremely temporarily we have 2 threads on one core while another one is idle, and in degenerate cases they can be fairly high priority 'real time' threads
22:09:06 <_mjg_> now i don't believe it does
22:09:09 <geist> but, you can't just rebalance the cpu cores every time they go idle
22:09:20 <_mjg_> sure
22:09:24 <_mjg_> scheduling is a bitch
22:09:37 <_mjg_> the bug i'm referencing resulted in NOT rebalancing at all
22:10:06 <geist> as it is its a bit odd because the sched_balance() call seems to update a global victim cpu pointer
22:10:14 <_mjg_> epony: tl;dr is that for fuck all reason freebsd suffered for years from next to no numa support in the vm and weirdly poor locking primitives
22:10:15 <geist> and seems to run concurrently, so it's a bit strange
22:10:44 <_mjg_> epony: this and a lot less high-profile stuff was fixed across last 2 years
22:10:45 <epony> some recently asked about numa on obsd misc list
22:10:55 <epony> (s.o)
22:11:01 <geist> i think ryzen is probably going to force it on everyone
22:11:05 <_mjg_> geist: i would not be surprised if it was buggy in this way
22:11:23 <_mjg_> epony: openbsd is still predominantly locked with one global lock
22:11:27 <epony> ye
22:11:28 <_mjg_> but they are making strides now
22:11:31 <epony> ye
22:11:33 <geist> _mjg_: i think it's 'safe' but basically it's a void * that's getting written to by multiple concurrent balancing tasks and at any one point you don't know which one updated it last
22:11:42 <Drakonis[m]> obsd is funny, theo proudly announced that they didn't use all of the cpu features
22:11:42 <_mjg_> geist: ye, in this sense
22:11:50 <geist> but at worse it's just stale
22:11:54 <_mjg_> geist: elsewhere you can find code which is "surprised" by smp
22:11:58 <epony> I remember the giant lock on fbsd too, not that many years ago
22:12:27 <_mjg_> epony: that said, it may be dfly has an edge in very specific benchmarks
22:12:46 <_mjg_> epony: but as a whole it is behind in general usability and perf
22:12:52 <epony> :-) or use cases where it's tuned to that use
22:13:06 <_mjg_> that once more is not surprising since the vast difference in manpower
22:13:07 <epony> compared to linux... a lot
22:13:17 <_mjg_> s/since/due to
22:13:38 <_mjg_> if anything, 1) dillon is clearly "overly" productive 2) fbsd has really weird stalls in development
22:13:43 --- quit: bemeurer (Ping timeout: 240 seconds)
22:14:13 <_mjg_> in fact i had very important patches for the namecache (fine-grained locking instead of a global(!)) which i could not be arsed to make committable for over a year
22:15:34 --- join: bemeurer (~bemeurer@185.236.200.248) joined #osdev
22:15:36 <_mjg_> geist: check out this smp gem
22:16:03 <_mjg_> kern/kern_veriexec.c:#define VERIEXEC_RW_UPGRADE(lock) while((rw_tryupgrade(lock)) == 0){};
22:16:06 <_mjg_> "[
22:16:15 <geist> woot
22:16:21 <_mjg_> juniper-ware
22:16:27 <_mjg_> i believe
22:16:53 <_mjg_> and yes, code using this can be reached by multiple readers
22:17:06 --- join: dustinm` (~dustinm@68.ip-149-56-14.net) joined #osdev
22:17:12 <_mjg_> so this is actively wrong, as opposed to just possibly super wasteful
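For reference on why that macro is actively wrong: rw_tryupgrade() can only succeed for the sole reader, so two readers spinning on it just wait on each other forever. The usual safe idiom is to drop the read lock, take the write lock, and revalidate, roughly as sketched below; the rw_enter_*/rw_exit_* names and the entry/revalidate helpers are generic illustrations, not NetBSD's exact API.

    /* Hypothetical rwlock and entry types for illustration. */
    struct rwlock;
    struct entry;

    void rw_enter_read(struct rwlock *);
    void rw_exit_read(struct rwlock *);
    void rw_enter_write(struct rwlock *);
    void rw_exit_write(struct rwlock *);
    int  rw_tryupgrade(struct rwlock *);   /* succeeds only for the sole reader */

    void revalidate(struct entry *);
    void modify(struct entry *);

    void update_entry(struct rwlock *lock, struct entry *e)
    {
        rw_enter_read(lock);
        if (!rw_tryupgrade(lock)) {
            /* Another reader exists: spinning here would livelock, since that
             * reader may itself be spinning on its own upgrade attempt. */
            rw_exit_read(lock);
            rw_enter_write(lock);
            revalidate(e);        /* state may have changed while unlocked */
        }
        modify(e);
        rw_exit_write(lock);
    }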
22:17:19 <Drakonis[m]> and they say all those vendors leaning on fbsd is great
22:17:55 <Drakonis[m]> doesn't juniper have a library that outputs data in multiple formats?
22:18:25 <_mjg_> it does, libxo
22:18:40 <_mjg_> i have mixed feelings about it. kind of works, but they also plugged it into random stuff for no reason
22:18:41 <Drakonis[m]> the infamous "output in json" joke
22:18:50 <_mjg_> i can see the use in general
22:20:06 --- quit: freakazoid0223_ (Quit: The early bird may get the worm, but the second mouse gets the cheese)
22:20:22 <_mjg_> geist: see this, read from the bottom http://www.feyrer.de/NetBSD/bx/blosxom.cgi/index.front?-tags=scheduler
22:20:23 <bslsk05> ​www.feyrer.de: hubertf's NetBSD blog
22:20:59 <_mjg_> fun fact: both solaris and netbsd used to do *cpu* walks in lock slow path to see if the lock owner was perhaps running there
22:21:33 <Drakonis[m]> a bunch of fbsd userland utils use libxo for output
22:21:41 <_mjg_> yes, that's what i was referring to
22:21:50 <_mjg_> for some reason they even added it to ls, fortunately that got reverted
22:22:10 <Drakonis[m]> lmao wow
22:22:43 <Drakonis[m]> it's a little ironic tbh
22:23:26 <_mjg_> i see it in 11 tools
22:23:44 <_mjg_> including w(1)
22:24:00 <_mjg_> netstat is probably reasonable
22:24:06 <_mjg_> and vmstat
22:24:13 <_mjg_> but arp?
22:24:15 <Drakonis[m]> i agree
22:24:29 <Drakonis[m]> lol why arp
22:24:49 <Drakonis[m]> i have no idea why not just pipe the output into another tool
22:25:09 <epony> 2) why the stalls?
22:25:14 <_mjg_> added last year Sponsored by: Rubicon Communications (Netgate)
22:25:27 <_mjg_> epony: i can only speculate
22:26:02 <epony> (I was about to fall from my chair when I saw the sponsored commits some time ago in the web view of the repo)
22:26:23 <_mjg_> one part, which /is a fact/, is that the companies contributing back are weirdly selective
22:26:37 <_mjg_> in particular there was a time where only cosmetic stuff would make it back
22:26:44 <Drakonis[m]> the sponsored commits are pretty much appliance vendors
22:26:47 <epony> I knew it was getting into commercial territories for a long time, but it looked absurd.
22:26:55 <_mjg_> what do you mean?
22:27:14 <Drakonis[m]> me?
22:27:14 <klange> if someone wants to pay me to get their company name in a commit message, i'm all ears
22:27:42 <_mjg_> epony
22:27:47 <Drakonis[m]> ah ok
22:27:55 <_mjg_> sponsored commits are nothing new
22:28:17 <_mjg_> although some of their contents is questionable
22:29:01 <_mjg_> nonetheless, most of the crucial work done in recent years was sponsored
22:29:18 <_mjg_> (numa, various drivers, meltdown etc.)
22:29:31 <epony> so, effectively, almost no work done outside sponsored requests?
22:29:37 <_mjg_> why?
22:29:47 <_mjg_> probably *most* work was not sponsored
22:29:53 <epony> oh ok then
22:29:57 <_mjg_> big ticket items were sponsored, but that's not surprising
22:30:05 <_mjg_> given the required time investment
22:30:13 <_mjg_> you can't hack away numa support on weekends
22:30:16 --- join: wgrant (~wgrant@ubuntu/member/wgrant) joined #osdev
22:31:11 <epony> in this regard fbsd is more like linux then
22:31:15 <_mjg_> that said, fbsd still has serious bottlenecks but is perfectly viable on modern hardware
22:32:34 <_mjg_> dfly is comparable in perf standpoint (with edge in certain cases, and behind in others)
22:32:44 <_mjg_> but it really loses in terms of general usability
22:32:50 <epony> and fbsd - linux?
22:32:52 <Drakonis[m]> it lacks the structured chaos that enables linux to evolve quickly
22:32:55 <_mjg_> linux is way ahead
22:33:15 <_mjg_> on modest hardware you wont be too behind by choosing freebsd
22:33:24 <_mjg_> but if you pick something > 128 threads
22:33:29 <_mjg_> you will be shooting yourself in the foot
22:33:48 <epony> in the lpc foot (as in not the hpc one)
22:33:53 <_mjg_> i'm not talking out of my ass here, i work on freebsd 9)
22:34:23 <epony> I did follow it in production up to 9 and dropped it.
22:34:53 <_mjg_> there is no way fbsd is suitable for hpc deployments
22:34:59 <_mjg_> and i don't think it ever will be
22:35:04 --- quit: bemeurer (Ping timeout: 255 seconds)
22:35:10 * geist nods
22:35:12 <_mjg_> it's a bad target to try to reach anyway
22:35:17 <_mjg_> for this os i mean
22:35:20 <epony> Maybe 7-8 brought the accelerated dev cycle and that coincided with my use cases changing
22:35:38 <_mjg_> if anything, the majority of real world deployments are weirdly small scale core-wise
22:35:55 <_mjg_> low double digit
22:36:14 <geist> perhaps running in VMs where the machine is carved up into smaller bites
22:36:31 <_mjg_> i checked stats from kernel core files reaching red hat
22:36:43 <_mjg_> there is an abundance of 32-core-and-fewer boxes
22:36:45 <Drakonis[m]> let's talk about cloud(butt) support on fbsd
22:36:55 <epony> most probably, as a generalisation, general availability products' workloads are not suitable and can't make any use of more than 4-8 cores
22:36:56 --- join: bemeurer (~bemeurer@2600:8802:5300:bb90:863a:4bff:fe06:c8b2) joined #osdev
22:37:13 <geist> well, that makes sense. the largest xeon is still under 32 cores per socket, so you'd have to get into dual and quad xeon setups
22:37:20 <geist> and then you're starting to take a lot more money to burn on something
22:37:24 <_mjg_> yea
22:37:28 <epony> I'd argue they flop on their faces at 4 concurrent threads.
22:37:41 --- quit: cantbetouched (Ping timeout: 255 seconds)
22:37:42 <_mjg_> the point is that fbsd happens to be perfectly fine at this real-world scale
22:37:53 <Drakonis[m]> how about the threadrippers and i9s of our time
22:37:53 <epony> yes, agreed
22:38:30 <Drakonis[m]> with double digit thread counts and price tags equal to organs
22:38:32 <epony> enthusiast game systems where games have not caught up..
22:38:43 <epony> nothing new under the sun
22:38:46 <Drakonis[m]> the consoles are catching up
22:38:58 <geist> i still kinda want to buy a home epyc box, but they're still hard to get ahold of
22:39:04 --- quit: climjark_ (Ping timeout: 268 seconds)
22:39:06 <Drakonis[m]> the next gen is expected to be double-digit thread counts
22:39:23 <epony> memory is expensive yet
22:39:25 <Drakonis[m]> thanks to ryzen i suppose
22:39:41 <geist> yeah if AMD gets the contract i don't see how it couldn't be a zen derivative
22:39:49 <_mjg_> for interested parties, netbsd really falls flat on its face. see this analysis of make -j 40 kernel build https://marc.info/?l=netbsd-tech-kern&m=150506124621943&w=2
22:39:53 <bslsk05> ​marc.info: 'performance issues during build.sh -j 40 kernel' - MARC
22:40:03 <_mjg_> to the best of my knowledge none of this got fixed to this day
22:40:12 <Drakonis[m]> Intel isn't going to take it
22:40:47 <geist> ah yes, reminds me i keep meaning to shove the core zircon locks in their own cache line
22:40:57 <Drakonis[m]> nvidia is notoriously difficult to work with, as seen in the previous generation
22:40:59 <geist> it's been a TODO on my list, but i kind of wanted a good benchmark so i can validate what i get
22:41:16 <geist> it's more fun to fix something and observe it get better than to assume it does
22:41:49 <_mjg_> there is an extra potential point of aligning *global* locks at 128
22:42:03 <Drakonis[m]> can zircon bootstrap itself
22:42:24 <geist> why do you think 128? because that's the L2 cache line size on some machines?
22:42:37 <Drakonis[m]> self hosting that is
22:42:40 <_mjg_> they all have a hardware prefetcher
22:42:42 <geist> Drakonis[m]: nah, probably wont for a while
22:42:52 <_mjg_> which if enabled *can* screw you over perf-wise
22:43:02 <_mjg_> probably not a big deal at your current scale
22:43:04 --- join: dbittman_ (~dbittman@2601:647:ca00:1651:b26e:bfff:fe31:5ba2) joined #osdev
22:43:11 <geist> yah probably not
22:43:16 <Drakonis[m]> ah, sucks
22:43:24 <_mjg_> geist: see https://reviews.freebsd.org/D15346
22:43:25 <bslsk05> ​reviews.freebsd.org: ⚙ D15346 Reduce false sharing in UMA on amd64 by increasing padding to 128 bytes
22:44:09 <geist> _mjg_: interesting. i have a cpu_align, which is set to 64 (basically the max cache line i'd expect anything to have)
22:44:24 <geist> though i have heard that core2 gen had 128 byte L2 cache lines, even if it advertised 64-byte L1 caches
22:44:29 <Drakonis[m]> apparently there's a virtual private cloud for fbsd, it is in golang
22:44:42 <geist> or at least always prefetched 2 lines at a time
22:44:48 <_mjg_> geist: 64 is the standard recommended value indeed
22:45:03 <geist> also same on arm, though technically the max cache line size is whatever
22:45:04 <_mjg_> geist: but it is basically a tradeoff between space and false sharing
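As a sketch of the tradeoff being discussed here: padding each per-CPU slot to twice the 64-byte line keeps neighbouring CPUs out of the same line and out of the same adjacent-line prefetch pair, at the cost of space. The constants and struct are illustrative assumptions, not FreeBSD's UMA code.

    #define CACHE_LINE  64
    #define PCPU_ALIGN  (2 * CACHE_LINE)   /* cover the 128-byte adjacent-line prefetch */
    #define MAX_CPUS    64

    /* Hot per-CPU counters: each slot occupies its own 128-byte block, so a
     * write by one CPU never dirties a line (or prefetch pair) its neighbour uses. */
    struct percpu_counters {
        unsigned long syscalls;
        unsigned long interrupts;
        unsigned long ctx_switches;
    } __attribute__((aligned(PCPU_ALIGN)));   /* sizeof() rounds up to 128 */

    static struct percpu_counters counters[MAX_CPUS];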
22:45:36 <geist> oh hey i had a microbenchmark question for you
22:45:57 <epony> I want a jail fixed on a couple of cores on a many core system :-)
22:45:59 <geist> say i have two algorithms that i need to switch at runtime based on some hardware detection, and it's otherwise hard to patch
22:46:20 <geist> one school of thought would be if (some bool) { algorithm 1; } else ...
22:46:44 <geist> and another would be to use a function pointer and have two impls
22:47:10 --- quit: bemeurer (Ping timeout: 256 seconds)
22:47:14 <geist> arguably both have their advantages when you think about it. the function pointer one is certainly cleaner, but then on a cold branch predictor it probably can't see through it
22:47:18 <epony> Drakonis[m] should be renamed to 5 char nick :-)
22:47:27 <geist> but then it can generate more optimal code once it's in there, probably
22:47:44 <Drakonis[m]> possibly
22:47:55 <geist> otoh the if ... case is probably easier to predict and you can hint for one of the two that you think is the one you favor
22:47:55 <klange> five is too short, needs to be an even number
22:48:17 <_mjg_> geist: don't branch on it
22:48:32 <geist> so function pointer it?
22:48:41 <epony> klang :-)
22:48:46 <_mjg_> geist: you can do what linux does and overwrite the target space with the desired implementation
22:48:55 <geist> yeah yeah i know that's the best, but assume i can't do that
22:49:10 <_mjg_> in that case i strongly suspect the branch will be faster
22:49:21 <_mjg_> even if it is always mispredicted
22:49:27 <geist> especially if you can collapse the condition to a global bool
22:49:46 <_mjg_> there are weird anomalies with indirect jumps
22:49:51 <_mjg_> perf wise
22:49:54 <geist> a modern x86 may be able to branch predict through a function pointer even if it's cold, but a derpy arm most definitely couldnt
22:50:16 <_mjg_> still, why do you need something of this sort?
22:50:36 <_mjg_> arguably a totally hackish but completely functional implementation of overwriting is easy to do
22:50:44 <geist> oh for example on a SMT vs non SMT machine the core selection logic may be somewhat simpler if it doesn't even consider that there are no pairs
22:51:05 <_mjg_> no i mean why you can't patch on boot
22:51:16 <klange> epony: how dare you insult my ancestors by dropping their sacred e - it separates us from the peasants!
22:51:22 <_mjg_> there are several things which can "tune" thsemselves based on found hardware
22:51:27 <geist> because by the time we know, it's already too late; we've locked down the kernel
22:51:38 <_mjg_> oh?
22:51:50 <_mjg_> you got exploitation-prevention measures?
22:51:53 <geist> we have a period at the beginning of kernel boot where we can patch things, but the discovery of all of the cpus in the system comes somewhat later
22:52:04 <epony> klange uh oh yikes
22:52:18 <geist> well, usual lock down the kernel, ASLR relocate it, etc
22:52:23 <geist> it's nicer to do all the fixups early on
22:52:24 <_mjg_> hm
22:52:35 <geist> plus, i dont want to do what linux does for debugging purposes if anything else
22:52:38 <_mjg_> may i suggest you store this info for next boot
22:52:47 <geist> we do patchups, but it's mostly replacing branches with others
22:52:51 <geist> or nops with a branch
22:53:03 <_mjg_> unless amd is planning to support erms
22:53:06 <geist> not copying a whole routine to a target address. i'd rather not do that if i can avoid it
22:53:12 <_mjg_> you will need to patch for perf
22:53:22 <geist> we do patch memcpy and whatnot, but that's easily early discoverable
22:53:28 <_mjg_> ok
22:53:36 <geist> but detecting whether or not we have SMT requires ACPI and etc etc
22:53:42 <geist> so its' farther down
22:53:42 <_mjg_> that's a bummer
22:53:46 <klange> we langes have been at war with the langs for generations
22:54:06 * epony expects address space randomisations to not always be at a fixed time and be possible (or even automated) for long running machines
22:54:31 <_mjg_> hm
22:54:41 <_mjg_> maybe there is a lower level hack around this
22:54:55 <_mjg_> perhaps cpuid gives an indication whether smt got enabled
22:54:56 <geist> so my point is, given the choice between using a function pointer or a function that just has an if/else case in it, there's actually a fair case to argue for the if/else
22:55:05 <_mjg_> yea
22:55:08 <geist> assuming you only have 2 decisions. if there are 3 or 4 then the function pointer probably takes over
22:55:20 <_mjg_> yea
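The two shapes being weighed look roughly like this (illustrative names, not zircon code): a branch on a global bool that can be hinted toward the favoured side, versus an indirect call through a pointer fixed up once topology is known.

    #include <stdbool.h>

    static void pick_cpu_smt(void)   { /* SMT-aware selection: consider sibling pairs */ }
    static void pick_cpu_nosmt(void) { /* simpler path: no siblings to consider */ }

    static bool cpu_has_smt;          /* set once, after CPU topology discovery */

    /* Variant A: if/else on a global bool, hinting the expected outcome. */
    static inline void pick_cpu_branch(void)
    {
        if (__builtin_expect(cpu_has_smt, 1))
            pick_cpu_smt();
        else
            pick_cpu_nosmt();
    }

    /* Variant B: indirect call through a pointer patched at init time. */
    static void (*pick_cpu_ptr)(void) = pick_cpu_nosmt;

    void topology_init(bool smt)
    {
        cpu_has_smt  = smt;
        pick_cpu_ptr = smt ? pick_cpu_smt : pick_cpu_nosmt;
    }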
22:56:23 <_mjg_> a lot of the cpu perf counter support from intel is bsd-licensed
22:56:30 <_mjg_> you can probably steal that for fuchsia
22:56:38 <geist> yeah probably worth it
22:56:40 <_mjg_> you will still need to do the hard work
22:56:56 <geist> someone wrote some sort of utility for the gigantic spew of stuff that intel does
22:56:58 --- join: Humble (~hchiramm@2405:204:d185:211c:838b:f448:9bcd:6c5a) joined #osdev
22:57:00 <_mjg_> but stuff like updating counters for new microarchs and whatnot is covered
22:57:00 <geist> uh whatever that's called
22:57:08 <_mjg_> pcm?
22:57:13 <geist> where you set a bigass physical buffer that it slams stuff into
22:57:17 <_mjg_> oh
22:57:34 <_mjg_> i don't know what that is
22:57:35 <geist> i'm going blank, but yeah we have support for that already
22:57:40 <_mjg_> i was thinking of something else
22:57:52 <_mjg_> anyway point is, you can ez benchmark how it behaves with this
22:58:17 <_mjg_> personally i wish that someone(tm) implemented an equivalent of perf stat on fbsd
22:58:21 <geist> right, but sort of my point is that intel is not the end all of things
22:58:30 <geist> i'm also thinking about highly in order dumb cpus, like cortex-a53
22:58:31 <_mjg_> ye i know, you need to worry about arm
22:58:42 <_mjg_> but measurement has to start somewhere
22:58:55 <_mjg_> how do you profile on arm?
22:59:00 <_mjg_> i mean in general
22:59:07 <geist> we dont. so there's a fair amount of 'what works for intel works for arm' which isn't really true
22:59:21 <_mjg_> does the cpu have perf counters akin to what intel provides?
22:59:22 <geist> though at the high level it's largely true
22:59:26 <geist> it does
22:59:26 <_mjg_> perhaps not necessarily as rich
22:59:53 <geist> just haven't found anyone to write it yet. there are predictably far more folks at work that are into x86 than arm
23:00:04 <_mjg_> ye
23:00:05 <geist> so it's a bit hard to find folks that really want to dive in and do debugging and perf on arm
23:00:38 <geist> and the lack of a lot of really good high end arm machines to run on has hurt things a bit. the new cavium boxes may help
23:00:43 <geist> whenever they come in
23:00:51 <_mjg_> there is some 96(?) core stuff
23:01:00 <_mjg_> thunderx or similar
23:01:08 <geist> yes we already have a pair of thunderx1s, 96 core, 2 sockets
23:01:09 <_mjg_> for some reason fbsd runs on it
23:01:21 <geist> we have x2s coming in now. 2x32x4 way SMT
23:01:46 <geist> and supposedly the cores are somewhat more OOO as well, the x1s are fairly in order cpus
23:02:23 <geist> saw some benchmarks on em, they're fairly competitive it seems
23:02:46 <geist> https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/
23:02:48 <bslsk05> ​www.servethehome.com: Cavium ThunderX2 Review and Benchmarks a Real Arm Server Option
23:03:31 <geist> once i get some breathing room i totally want to bring up zircon on that
23:04:37 <geist> cavium i got quite a bit of respect for. they're going whole hog into this arm server space stuff. it's not some side project of a larger company
23:04:41 <geist> and their docs are quite good
23:16:04 --- quit: zeus1 (Ping timeout: 268 seconds)
23:16:38 --- quit: epony (Read error: Connection reset by peer)
23:17:02 --- join: epony (~nym@79-100-134-61.ip.btc-net.bg) joined #osdev
23:18:27 --- join: immibis (~chatzilla@222-155-160-32-fibre.bb.spark.co.nz) joined #osdev
23:19:00 --- quit: epony (Read error: Connection reset by peer)
23:19:18 --- join: epony (~nym@79-100-134-61.ip.btc-net.bg) joined #osdev
23:25:55 <geist> _mjg_: so the gist of the 128 thing you posted before is because it may prefetch from the previous line you really want *2?
23:26:08 <geist> but then wouldn't that imply that you want to put the actual target data a line into it?
23:26:30 <geist> ie, 64 bytes into a 128 byte block,. so that there's a whole cache line of prefetch before it?
23:28:43 <_mjg_> it will fetch the 'target' and the next/previous one, depending on which gives 128 alignment
23:29:17 <geist> yeah so i guess it works, but mostly for [1], [2], etc in an array
23:29:24 <geist> the 0th element has whatever in front of it
23:29:50 <_mjg_> perhaps it is unclear from the context, but with the patch the pcpu data is 128 byte aligned
23:30:08 <_mjg_> so cpu 0 is at 0x000, cpu1 is at 0x080 and so on
23:30:11 <geist> right, but the point was that the cpu may prefetch into the next line when it accessed the previous one, right?
23:30:33 <geist> (unless you pad them by 2 cache lines apart)
23:30:34 <_mjg_> it may prefetch next or previous, whatever will fit 128 byte alignment
23:31:00 <geist> but, nothing keeps it from prefetching from whatever is in front of cpu 0s
23:31:11 <_mjg_> it will not do it
23:31:15 <geist> you really would want a cache line of padding *before* the 0th one too
23:31:22 <_mjg_> if you read past 0x40 and before 0x80
23:31:25 <_mjg_> it will prefetch 0x00
23:31:38 <geist> oh oh. i see
23:31:48 <geist> it fetches the 128 byte aligned block that contains your line
23:31:54 <_mjg_> yea
23:32:07 <geist> is this just some behavior of the intel coherency protocol?
23:32:15 <geist> is it well known or just experimentally discovered?
23:32:29 <_mjg_> it is well known but "hidden"
23:32:40 <_mjg_> it sounds stupid, but:
23:32:46 <geist> may be for the same reason that it was on the core2: the L3 cache may actually be using 128 byte lines
23:32:52 <_mjg_> most of what you will read on intel docs/whatever will tell you to pad to 64
23:33:00 <_mjg_> but once you search for the prefetcher
23:33:02 <geist> yeah that's what i've seen
23:33:17 <_mjg_> they will tell you to either pad 64, 128 or disable it, depending on the workload
23:33:38 <geist> interesting. you know off the top of your head what the MONITOR/MWAIT size is for that hardware?
23:33:45 <geist> theoretically it can differ from the cache line size too
23:33:49 <_mjg_> nope
23:34:03 <_mjg_> i don't know why the 64 byte is so prevalent without any asterisks, so to speak
23:34:16 <_mjg_> if you look around you will see that people do pad to 128 plenty of times
23:34:31 <geist> it seems to just be the standard thing. 32 and 64 have pretty much historically been cache line sizes over the last 10-20 years
23:34:36 <_mjg_> so basically "everyone" knows
23:34:49 <geist> arm was 32 and 64 for the longest time, but 64 now seems to be pretty much standard
23:34:59 --- quit: quul (Quit: WeeChat 2.0.1)
23:35:03 <_mjg_> interestingly, linux pads the global structures by 64
23:35:43 <geist> in the case of the cpu arrays, seems that the behavior would be problematic as it crosses between where node 0 and node 1s cpu are
23:35:53 <geist> *assuming* the kernel numbers them in increasing node order
23:35:57 <geist> which i guess is not always the case
23:36:03 <_mjg_> it was super deficient in fbsd's case here
23:36:07 <_mjg_> with the padding of 64
23:36:17 <_mjg_> the array started NOT aligned to 64
23:36:23 <geist> does freebsd happen to number the cores like node 0, node 1, node 0, node 1?
23:36:34 <geist> oh not aligned would be bad all around
23:36:48 <_mjg_> which meant lines were bouncing across sockets a lot
23:37:10 <_mjg_> erm, the beginning of the array was NOT aligned to 128
23:37:14 <geist> absolutely but if the core probably accesses its own stuff far more often, there's also some value to it being on the home node
23:37:32 <geist> but if it does have an aliasing problem, hopefully the neighboring cpu #s are on the same node
23:37:34 <_mjg_> oh the real fix is to utilize node-local pages for this stuff
23:37:52 <_mjg_> no argument here
23:37:57 <geist> yeah, i've debated that a bit, but then you dont get these nice pretty arrays like that
23:38:03 <geist> without doing some page crossing nonsense
23:38:06 <_mjg_> these arrays are fucking terrible
23:38:17 <_mjg_> what you can do is this:
23:38:25 --- join: sixand (~Thunderbi@113.201.117.255) joined #osdev
23:38:43 <geist> an array of pointers to structures i assume is not great either
23:38:45 <_mjg_> you create a dedicated per-cpu per-node allocator which returns an index
23:39:11 <_mjg_> this index is an offset into your local per-cpu area
23:39:14 <geist> right
23:39:24 <_mjg_> and there you go, no avoidable traffic whatsoever
23:39:31 <_mjg_> well mostly L)
23:39:45 <geist> sure, but when you want to access another core's data (which you should avoid, but sometimes you gotta do) you need to be able to look them up
23:39:48 --- join: grouse (~grouse@83-233-9-2.cust.bredband2.com) joined #osdev
23:39:57 <_mjg_> that's also easy
23:39:58 <geist> but then maybe the array of pointers to per cpu arrays is an acceptable evil here
23:40:08 <_mjg_> you can have a global array with starting points of per-cpu areas
23:40:16 <geist> right.
23:40:24 <_mjg_> but you just don't use it when accessing your own stuff
23:40:28 <geist> right
23:40:40 <geist> yeah that's basically what i was thinking of doing here soon on zircon too
23:40:49 <_mjg_> ye it is kind of natural
23:41:01 <geist> since i was just thinking of redoing the way the per cpu structure is done. it's at least aligned on 64 byte boundaries and whatnot, but it's statically allocated, etc
23:41:12 <_mjg_> even ignoring wasted space due to padding, there is just extra latency induced by these arrays
23:41:12 <geist> and can always use gs:0 or whatnot to find your local one
23:41:23 <_mjg_> if you evict your entry from the cache, you then have to fetch it from the remote node
23:41:26 <_mjg_> total crap
23:41:31 * geist nods
23:41:54 <geist> i think i'll do that, i was hoping there was something nicer than having to have the array of pointers to do cross cpu lookup, but at the end of the day you can't do it perfectly
23:42:08 --- quit: jjuran (Ping timeout: 256 seconds)
23:42:15 <geist> and that gives you freedom to allocate cpu #s however you want
23:42:17 <_mjg_> well depends what you are looking for here
23:42:45 <geist> right, obviously you should optimize for local cpu struct lookup, since 95% of it should be that
23:42:49 <_mjg_> the cpu index -> local area array is read-only so not a big deal to read it
23:42:57 <geist> but in the cases where you need to go cross cpu you should still be able to O(1) go find it
23:43:01 <geist> right
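A sketch of the scheme described above, with illustrative names (percpu_base, this_cpu_id() and the boot-time bump allocator are assumptions): offsets handed out once are valid in every CPU's node-local area, the hot path only dereferences its own base, and the read-mostly table of bases covers the rare O(1) cross-CPU lookup.

    #include <stddef.h>

    #define MAX_CPUS 64

    extern void    *percpu_base[MAX_CPUS];  /* node-local area per CPU, set at boot */
    extern unsigned this_cpu_id(void);      /* assumed: e.g. read from a per-CPU register */

    static size_t percpu_used;              /* boot-time bump allocator, single-threaded */

    /* Hand out an offset; the same offset is valid in every CPU's area. */
    size_t percpu_alloc(size_t size, size_t align)
    {
        size_t off = (percpu_used + align - 1) & ~(align - 1);
        percpu_used = off + size;
        return off;
    }

    /* Hot path: my own, node-local data. */
    static inline void *percpu_ptr_local(size_t off)
    {
        return (char *)percpu_base[this_cpu_id()] + off;
    }

    /* Rare path: peek at another CPU's data via the read-mostly base table. */
    static inline void *percpu_ptr_remote(unsigned cpu, size_t off)
    {
        return (char *)percpu_base[cpu] + off;
    }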
23:43:08 <_mjg_> if anything, the more often you want to read foreign stuff
23:43:12 --- quit: sixand (Ping timeout: 268 seconds)
23:43:21 <_mjg_> the more you perhaps want to do something else
23:43:25 <geist> exactly.
23:43:42 <_mjg_> the standard reason to read other cpus is to roll up counters
23:43:47 --- quit: Ryanel (Quit: Leaving)
23:43:58 <geist> and in that i'm also trying to make sure i arrange for data in the array to be separated into stuff that gets updated a lot, stuff that is mostly read only from other cores, etc
23:44:00 <_mjg_> effects of this can be partially combated by rolling up everyone from a given node
23:44:20 <geist> right, we have a kstats that we bump counters on the local nodes and then roll them up when we want to present them to user space
23:44:51 <_mjg_> i mean, someone from a given node rolls everyone else up
23:44:54 <geist> those already go in their own per cpu struct
23:45:03 <_mjg_> but not from other nodes
23:45:04 <geist> right
23:45:23 <_mjg_> these crap counters are overrated anyway
23:45:28 <geist> but even if it does, it's debugging code, and it's still in a read only way from a foreign nodes
23:45:32 --- join: jjuran (~jjuran@c-73-132-80-121.hsd1.md.comcast.net) joined #osdev
23:45:36 <geist> so it's not a full eviction
23:46:03 <_mjg_> you are still inducing a state transition
23:46:08 <_mjg_> kind of crap
23:46:08 <geist> oh 100%
23:46:11 <geist> indeed
23:46:20 <_mjg_> this stuff really weirdly hurts
23:46:28 <_mjg_> fucking legacy codebases are full of this shit
23:46:39 <geist> reminds me too, i need to eventually sort out what to do about the purely global cpu bitmaps that are too inconvenient
23:46:42 <geist> er convenient
23:46:55 <geist> like, the bitmap that tracks if a cpu is idle or not. it's a hole in the cache
23:46:58 <geist> since everyone updates it atomically
23:47:07 <geist> eventually i need to find a better solution for that
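The kind of structure being described, as a minimal C11 sketch (illustrative, not zircon's actual code): one global mask that every core updates with an atomic RMW on each idle transition, which is exactly why the line holding it bounces between all the cores.

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t idle_cpu_mask;   /* one hot cache line shared by everyone */

    void cpu_enter_idle(unsigned cpu)
    {
        /* atomic RMW: pulls the line exclusive into this core's cache */
        atomic_fetch_or(&idle_cpu_mask, (uint64_t)1 << cpu);
    }

    void cpu_exit_idle(unsigned cpu)
    {
        atomic_fetch_and(&idle_cpu_mask, ~((uint64_t)1 << cpu));
    }

    uint64_t snapshot_idle_cpus(void)
    {
        return atomic_load(&idle_cpu_mask);  /* read on every wakeup placement decision */
    }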
23:47:08 <_mjg_> uh
23:47:45 <_mjg_> so you were looking at netbsd for ideas?
23:47:56 <geist> no no, that's just old expediant stuff
23:47:59 <_mjg_> :)
23:48:17 <_mjg_> dfly may be worth looking at, but i have to alert you that their code is weirdly crude
23:48:20 <geist> trouble is the zircon scheduler acts more like NT than it acts like unix schedulers: when you wake up a thread it then and there decides what core to run it on
23:48:34 <geist> instead of doing what most unices seem to do: run it wherever it ran last and let the balancer sort it out
23:48:36 <_mjg_> huh
23:49:02 <geist> so in order to decide what core to run it on it looks at the idleness of the system and tries a weighted bias thing to decide
23:49:09 <geist> but it needs to know what cores are idle to do it
23:49:33 <_mjg_> i don't know much about schedulers, seems weirdly backwards though
23:49:46 <geist> linux/unix seems weirdly backwards to me
23:49:49 <_mjg_> i mean where is cpu topology coming into play here
23:50:22 <geist> as it decides it tries to pick nearest cores from where it ran last, unless it hasn't run in a long time and thus is cold
23:51:03 <geist> it's not fundamentally different, the question is when do you decide. linux errs on the side of running it where it ran last now, and then let the balancer sort it out in N jiffies
23:51:12 <geist> vs finding the optimal one to run it on *now*
23:51:46 <geist> topology kicks in when trying to decide what core to run it on, and if it can't find the one it ran on last then try to find the nearest, etc
23:52:04 <geist> it's kind of a real time vs throughput thing
23:52:24 <_mjg_> ye it is tradeoffs all the way down
23:52:45 <geist> but that idle bitmap is a problem that i will need to sort out to scale up
23:52:53 <geist> so i have a wary eye on it
23:53:56 <geist> this is what NT traditionally was known to do, and from observing macos it seems to do something similar
23:54:05 <geist> it is very aggressive about bouncing threads around
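Roughly what the 'decide at wakeup' flavour looks like as an illustrative sketch (the 64-CPU mask and cpu_cluster() topology helper are assumptions): prefer the CPU the thread last ran on if it is idle, then the nearest idle CPU, then any idle CPU, and otherwise queue it where it ran last.

    #include <stdint.h>

    extern unsigned cpu_cluster(unsigned cpu);   /* assumed topology helper */

    unsigned pick_cpu_at_wakeup(unsigned last_cpu, uint64_t idle_mask)
    {
        if (idle_mask & ((uint64_t)1 << last_cpu))      /* caches likely still warm there */
            return last_cpu;

        for (unsigned cpu = 0; cpu < 64; cpu++)         /* nearest idle: same cluster */
            if ((idle_mask & ((uint64_t)1 << cpu)) &&
                cpu_cluster(cpu) == cpu_cluster(last_cpu))
                return cpu;

        for (unsigned cpu = 0; cpu < 64; cpu++)         /* any idle core at all */
            if (idle_mask & ((uint64_t)1 << cpu))
                return cpu;

        return last_cpu;                                /* nothing idle: queue behind last */
    }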
23:54:36 --- quit: dbittman_ (Ping timeout: 276 seconds)
23:55:19 <_mjg_> do you know a decent scheduler benchmark to look at?
23:55:49 <geist> i dont
23:56:22 <_mjg_> i read some scheduler people (linux, freebsd) and they all think their solution is the best
23:56:25 <_mjg_> 8)
23:56:35 <geist> yah. and it depends a lot on what you're optimizing for
23:56:53 <geist> the linux solution (CFS) is darn clever, but it definitely is a throughput/fairness thing
23:57:04 <geist> and less of a real time, low latency thing
23:57:14 <_mjg_> i kind of gave up on this part of osdev, way too murky
23:57:23 <_mjg_> in terms of what makes sense
23:57:46 <geist> same. i just want something that works well enough for now, and not what theoretically works forever. since schedulers tend to be fairly compartmentalized, i'm not sweating it
23:58:03 <geist> as long as it can be replaced with another implementation later and there aren't any base assumptions baked into the system i'm fine
23:58:30 <geist> it's things that i should look out for now that will bite us design wise N years down the road that i worry about. data structure placement and whatnot is one of those
23:58:34 <geist> since that's harder to fiddle with later
23:59:00 <_mjg_> ye
23:59:30 <_mjg_> i really need to get someone to implement NOT merging data sections across .o files
23:59:57 <_mjg_> i.e. when grouping data leave it padded at least by 64 between .o files
23:59:59 --- log: ended osdev/18.05.17