Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ and can't be searched

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev&y=18&m=1&d=8

Monday, 8 January 2018

00:00:00 --- log: started osdev/18.01.08
00:01:31 --- join: nzoueidi (~nzoueidi@ubuntu/member/na3il) joined #osdev
00:04:55 --- join: xerpi (~xerpi@238.red-83-45-192.dynamicip.rima-tde.net) joined #osdev
00:09:19 --- quit: JackyLi (Remote host closed the connection)
00:12:38 --- quit: user10032 (Quit: Leaving)
00:16:34 --- quit: fusta (Ping timeout: 240 seconds)
00:18:16 --- join: fusta (~fusta@139.179.50.177) joined #osdev
00:18:33 --- quit: lokust (Ping timeout: 276 seconds)
00:22:35 --- join: lokust (~smuxi@li1022-219.members.linode.com) joined #osdev
00:25:20 --- quit: fusta (Ping timeout: 240 seconds)
00:28:16 --- quit: alphawarr1or (Quit: Connection closed for inactivity)
00:29:30 --- quit: RyuKojiro (Read error: Connection reset by peer)
00:29:44 --- join: RyuKojiro (~kojiro@c-24-23-203-58.hsd1.ca.comcast.net) joined #osdev
00:30:36 --- join: fusta (~fusta@139.179.50.177) joined #osdev
00:52:29 --- quit: lutoma (Quit: WeeChat 1.9.1)
00:53:16 --- join: lutoma (~lutoma@wikipedia/lutoma) joined #osdev
00:58:56 --- join: _ether_ (~ether@host.126.101.23.62.rev.coltfrance.com) joined #osdev
01:03:40 --- join: bm371613 (~bartek@2a02:a317:603f:9800:391:ed9f:1c09:1285) joined #osdev
01:04:28 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-afyyhcgzewnknuka) joined #osdev
01:09:55 --- quit: ornitorrincos (Quit: ZNC 1.6.5 - http://znc.in)
01:10:59 --- join: ornitorrincos (~ornitorri@163.172.62.96) joined #osdev
01:10:59 --- quit: ornitorrincos (Changing host)
01:10:59 --- join: ornitorrincos (~ornitorri@unaffiliated/ornitorrincos) joined #osdev
01:17:44 --- quit: lordPoseidon (Ping timeout: 248 seconds)
01:21:30 --- join: sgautam (~sgautam@59.182.255.141) joined #osdev
01:22:15 --- nick: sgautam -> GautamS
01:23:52 <GautamS> Huh. I just installed root, (I forgot about su, haha.), and apparently it's a program from CERN.
01:24:42 <GautamS> I read about it, but I don't understand what it's used for. I mean, it appears to be a C/C++ interpreter, but it's supposed to be used for much more.
01:25:36 --- join: lordPoseidon (~lordPosei@14.139.56.18) joined #osdev
01:25:38 <moondeck[m]> Wait, whut? su?
01:25:45 <moondeck[m]> I believe su is a unix utility
01:25:48 <GautamS> Yeah.
01:26:09 <GautamS> I wanted to login as root in my terminal.
01:26:20 <GautamS> Ended up running "sudo apt-get install root"
01:26:46 <moondeck[m]> you dont need to install root lol
01:26:53 <moondeck[m]> ah, wait, now i understand your first statement
01:27:04 <moondeck[m]> install root lmao
01:27:11 <GautamS> Yup. Root doesn't do what I thought it would.
01:27:26 <GautamS> Opened up an interpreter that runs C pretty nicely.
01:27:27 <_mjg> there is a file manager named git
01:28:15 <moondeck[m]> naming a package root is batshit crazy
01:28:45 <geist> CERN!
01:28:55 <moondeck[m]> CERN is awesome
01:29:04 <GautamS> https://root.cern.ch/
01:29:04 <bslsk05> ​root.cern.ch: ROOT a Data analysis Framework | ROOT a Data analysis Framework
01:29:39 <moondeck[m]> aaaa, C++
01:29:45 * moondeck[m] jumps out the window
01:29:51 <GautamS> lol
01:30:08 <geist> join john titor in saving us from CERN
01:30:24 <_mjg> there is a debugger named crash. have fun filtering results.
01:30:33 <moondeck[m]> john titor?
01:30:49 <moondeck[m]> oh that dude
01:32:05 <geist> exactly
01:32:10 <GautamS> _mjg: Oh man. We need error, warning, and ok next.
01:33:04 <moondeck[m]> ok, a program that returns 0
01:33:19 <moondeck[m]> possible feature would be writing "ok" on the screen
01:35:35 --- quit: Humble (Ping timeout: 265 seconds)
01:36:32 --- quit: hmmmm (Remote host closed the connection)
01:36:56 --- quit: fusta (Ping timeout: 248 seconds)
01:38:24 --- join: fusta (~fusta@139.179.50.177) joined #osdev
01:44:54 --- join: john51_ (~john@15255.s.t4vps.eu) joined #osdev
01:45:21 --- quit: zwliew (Quit: Connection closed for inactivity)
01:46:50 --- quit: john51 (Ping timeout: 240 seconds)
01:53:37 --- quit: k4m1 (Quit: brb reboot)
01:53:59 --- join: glauxosdever (~alex@ppp-94-66-36-167.home.otenet.gr) joined #osdev
01:59:21 --- join: eremitah_ (~int@unaffiliated/eremitah) joined #osdev
02:00:14 <Brnocrist> I never thought that O_DIRECT could be a security feature :D
02:01:32 --- quit: FreeFull ()
02:02:50 --- quit: eremitah (Ping timeout: 240 seconds)
02:02:50 --- nick: eremitah_ -> eremitah
02:08:48 --- quit: immibis (Ping timeout: 260 seconds)
02:09:34 --- quit: caen23 (Quit: leaving)
02:09:56 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
02:12:59 --- join: eivarv (~eivarv@cm-84.215.4.97.getinternet.no) joined #osdev
02:14:14 --- quit: GautamS (Quit: GautamS)
02:22:03 --- quit: rap_hael (Quit: WeeChat 1.9.1)
02:22:18 --- join: raph_ael (~raphael@2a00:dcc0:dead:a066:216:3cff:feae:868a) joined #osdev
02:25:05 --- quit: xerpi (Remote host closed the connection)
02:30:28 --- join: Humble (hchiramm@nat/redhat/x-kylvkylptscmufbu) joined #osdev
02:32:48 --- join: xenos1984 (~xenos1984@2001:bb8:2002:200:6651:6ff:fe53:a120) joined #osdev
02:46:32 --- quit: Belxjander (Ping timeout: 268 seconds)
02:47:39 --- join: zwliew (uid161395@gateway/web/irccloud.com/x-wjqkahsrcyayrktv) joined #osdev
02:51:32 --- join: aesycos (~aesycos@mobile-166-177-120-18.mycingular.net) joined #osdev
02:52:14 --- quit: sferrini (Read error: Connection reset by peer)
02:53:14 --- join: sferrini (sid115350@gateway/web/irccloud.com/x-secoscmohsqykswp) joined #osdev
02:53:20 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
03:07:35 --- join: OXCC (~p1@ip5451817e.direct-adsl.nl) joined #osdev
03:12:39 --- part: OXCC left #osdev
03:15:42 --- join: dennis95 (~dennis@p50915F28.dip0.t-ipconnect.de) joined #osdev
03:18:35 --- join: vmlinuz (~vmlinuz@2804:431:f725:11c:c664:e88f:2d37:3d9c) joined #osdev
03:18:35 --- quit: vmlinuz (Changing host)
03:18:35 --- join: vmlinuz (~vmlinuz@unaffiliated/vmlinuz) joined #osdev
03:20:17 --- join: quc (~quc@host-89-230-168-166.dynamic.mm.pl) joined #osdev
03:21:46 --- quit: aesycos (Remote host closed the connection)
03:22:00 --- join: s0n1cB00m (~s0n1cB00m@cpc83309-brig21-2-0-cust565.3-3.cable.virginm.net) joined #osdev
03:23:02 --- quit: CheckDavid (Quit: Connection closed for inactivity)
03:27:52 --- quit: jjuran (Ping timeout: 248 seconds)
03:28:54 --- join: jjuran (~jjuran@c-73-132-80-121.hsd1.md.comcast.net) joined #osdev
03:29:25 --- join: k4m1 (~k4m1@82-181-0-34.bb.dnainternet.fi) joined #osdev
03:30:18 --- quit: hunterlabs_ (Ping timeout: 276 seconds)
03:34:45 --- join: aesycos (~aesycos@mobile-166-177-120-18.mycingular.net) joined #osdev
03:35:38 --- quit: regreg_ (Read error: Connection reset by peer)
03:36:05 --- join: regreg_ (~regreg@85.121.54.224) joined #osdev
03:36:32 --- quit: regreg_ (Read error: Connection reset by peer)
03:37:00 --- join: regreg_ (~regreg@85.121.54.224) joined #osdev
03:43:01 --- join: alphawarr1or (uid243905@gateway/web/irccloud.com/x-enftxdxqxyfnskor) joined #osdev
03:50:02 --- quit: quc (Remote host closed the connection)
03:50:19 --- join: quc (~quc@host-89-230-168-166.dynamic.mm.pl) joined #osdev
03:55:09 --- quit: raphaelsc (Remote host closed the connection)
03:55:47 --- join: hunterlabs (Elite20801@gateway/shell/elitebnc/x-uwpqtaqjjhexileh) joined #osdev
04:15:27 --- quit: k4m1 (Remote host closed the connection)
04:17:45 --- join: vaibhav (~vnagare@125.16.97.127) joined #osdev
04:27:35 --- quit: s0n1cB00m (Remote host closed the connection)
04:27:58 --- quit: Belxjander (Ping timeout: 240 seconds)
04:30:31 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
04:40:11 --- join: s0n1cB00m (~s0n1cB00m@cpc83309-brig21-2-0-cust565.3-3.cable.virginm.net) joined #osdev
04:45:51 <rain1> hello os dev
04:48:26 --- join: zng (~zng@ool-18ba49be.dyn.optonline.net) joined #osdev
04:51:56 <sham1> Hi
05:00:15 --- quit: awang (Ping timeout: 252 seconds)
05:08:07 --- quit: listenmore (Remote host closed the connection)
05:08:30 --- join: listenmore (~strike@2.27.123.231) joined #osdev
05:08:37 --- quit: s0n1cB00m (Remote host closed the connection)
05:09:09 --- join: s0n1cB00m (~s0n1cB00m@cpc83309-brig21-2-0-cust565.3-3.cable.virginm.net) joined #osdev
05:10:47 --- quit: s0n1cB00m (Read error: Connection reset by peer)
05:11:11 --- join: s0n1cB00m (~s0n1cB00m@cpc83309-brig21-2-0-cust565.3-3.cable.virginm.net) joined #osdev
05:16:23 --- quit: s0n1cB00m (Ping timeout: 268 seconds)
05:25:14 --- quit: Humble (Ping timeout: 240 seconds)
05:26:43 --- join: awang (~awang@rrcs-24-106-163-46.central.biz.rr.com) joined #osdev
05:27:38 --- join: heat (~heat@sortix/contributor/heat) joined #osdev
05:27:49 --- join: s0n1cB00m (~s0n1cB00m@host-92-14-56-128.as43234.net) joined #osdev
05:29:59 --- join: FreeFull (~freefull@defocus/sausage-lover) joined #osdev
05:34:03 --- join: m_t (~m_t@p5DDA3D03.dip0.t-ipconnect.de) joined #osdev
05:46:26 <gamozo> Hi
05:51:18 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
05:54:05 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
05:55:21 --- quit: zwliew (Quit: Connection closed for inactivity)
06:05:06 --- quit: s0n1cB00m (Remote host closed the connection)
06:05:27 --- join: s0n1cB00m (~s0n1cB00m@host-92-14-56-128.as43234.net) joined #osdev
06:14:51 --- join: Vercas-web (d920cbb2@gateway/web/freenode/ip.217.32.203.178) joined #osdev
06:14:57 --- quit: Belxjander (Ping timeout: 265 seconds)
06:14:57 --- join: John___ (~John__@79.97.140.214) joined #osdev
06:17:49 --- quit: s0n1cB00m ()
06:19:06 --- quit: FMan (Quit: Leaving)
06:19:49 <Vercas-web> Is PKE available on all Intel processors..?
06:20:07 <Vercas-web> I can't find any way to test if CR4.PKE and associated instructions are actually implemented.
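(For reference: protection-key support is reported through CPUID leaf 7, subleaf 0 — ECX bit 3 (PKU) says the hardware implements the feature, ECX bit 4 (OSPKE) says the OS has set CR4.PKE. A minimal userland sketch of the check, assuming GCC/Clang and <cpuid.h>:)

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        /* CPUID leaf 7, subleaf 0: structured extended feature flags */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
            return 1;
        int pku   = (ecx >> 3) & 1;  /* hardware supports protection keys */
        int ospke = (ecx >> 4) & 1;  /* OS has enabled them via CR4.PKE   */
        printf("PKU=%d OSPKE=%d\n", pku, ospke);
        return 0;
    }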
06:20:50 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
06:21:35 --- join: ftantic (~Thunderbi@163.179.236.139) joined #osdev
06:23:36 --- join: sgautam (~sgautam@59.182.248.0) joined #osdev
06:23:37 --- nick: sgautam -> GautamS
06:28:35 <GautamS> hahaha
06:29:18 <GautamS> I just laughed at a message from 5 hours ago. Urgh
06:29:29 <GautamS> "Brnocrist I never thought that O_DIRECT could be a security feature :D"
06:29:36 <GautamS> Polari is a terrible IRC client.
06:30:29 --- join: nerdopoly (~nerdopoly@144.32.240.132) joined #osdev
06:33:13 --- join: mmu_man (~revol@vaf26-2-82-244-111-82.fbx.proxad.net) joined #osdev
06:33:53 --- join: Lowl3v3l (~Lowl3v3l@dslb-088-075-089-098.088.075.pools.vodafone-ip.de) joined #osdev
06:41:17 <alexday> :D
06:42:07 <alexday> does anyone know someone who knows anyone about writing search engines! and has a big heart :P to help
06:42:42 --- quit: alphawarr1or (Quit: Connection closed for inactivity)
06:45:01 --- join: sdfgsdf (~sdfgsdfg@unaffiliated/sdfgsdfg) joined #osdev
06:47:43 --- quit: GautamS (Quit: GautamS)
06:48:38 --- join: Humble (~hchiramm@2405:204:5706:85a4:82b2:5197:f19d:5ca) joined #osdev
06:51:54 --- quit: m_t (Remote host closed the connection)
06:52:13 --- join: m_t (~m_t@p5DDA3D03.dip0.t-ipconnect.de) joined #osdev
06:54:34 --- join: Myx0 (~Myx0@83.169.216.180) joined #osdev
06:59:21 --- part: Myx0 left #osdev
07:00:48 --- quit: xenos1984 (Quit: Leaving.)
07:05:17 --- quit: ftantic (Quit: ftantic)
07:06:50 --- quit: John___ (Ping timeout: 240 seconds)
07:07:31 --- join: alphawarr1or (uid243905@gateway/web/irccloud.com/x-kdcqramijkysdord) joined #osdev
07:08:07 <alphawarr1or> Hello everyone. What do you all think about meltdown and spectre?
07:09:13 <rain1> thought it was interesting how nothing realy changed much
07:09:35 <rain1> a bunch of doomsayers were talking about how the internet is gonna collapse
07:09:59 <rain1> also seems to be that it's the most widespread bug ever, because it occurred at the lowest level
07:10:02 <alphawarr1or> oh but it's also said that the patch can slow pcs a lot
07:10:16 <rain1> yeah they slowed a bit sure but all the lights are still on
07:10:22 <alphawarr1or> I mean I've seen a benchmark with redis and it yielded around 30%
07:10:34 <alphawarr1or> that's a lot sadly
07:10:34 <rain1> it's just surprising to me how much 'damage' the internet can take, yet still work
07:10:46 <alphawarr1or> oh what do you mean?
07:10:59 <alphawarr1or> well yeah this one is painful esp since I have an old cpu...
07:11:16 <rain1> well another aspect of it
07:11:26 <rain1> you already need code execution on the target machine, in order to use these exploits
07:11:33 <rain1> having code exec is like... kind of a big deal already
07:11:45 <alphawarr1or> well yeah...
07:11:48 <rain1> it's not like heartbleed or shellshock where you can just hit any server
07:11:56 <rain1> you have to actually chain it with another exploit
07:12:27 <alphawarr1or> well yeah but since linux merged the fix + ms is working on one I guess it's gonna be painful for most low tier CPUs
07:12:30 <rain1> (there was the idea of going horizontally on a hypervisor though)
07:12:41 <rain1> yeah
07:12:43 <rain1> regarding CPUs
07:12:46 <alphawarr1or> well also most android phones won't be updated...
07:12:53 <rain1> i am very curious about if this will change CPU design
07:13:03 <alphawarr1or> oh well me too
07:13:04 <rain1> they have all these textbooks how to implement various aspects of a CPU
07:13:11 <rain1> i guess they are going to have to rewrite parts :)
07:13:17 <alphawarr1or> well ofc
07:13:21 <alphawarr1or> but what could fix this bug?
07:13:39 <alphawarr1or> they'd have to change branch prediction right?
07:13:53 <rain1> im not sure
07:14:06 <rain1> i dont 100% understand the details of the exploit
07:14:11 <alphawarr1or> well I still don't understand how these attacks actually work either
07:14:23 <alphawarr1or> i've just seen some sample code
07:14:24 <rain1> but it seems to be that during out of order execution, stuff outside of the process's priv. level was being cached
07:14:38 <alphawarr1or> but we can't read the cache can we?
07:15:05 <alphawarr1or> well yeah my rpi's cpu is protected as it doesn't do out of order execution
07:15:37 <rain1> you can't directly read the cache
07:15:48 <rain1> but they used a timing trick to indirectly see it
07:15:52 --- join: NotSecwitter (~NonSecwit@unaffiliated/nonsecwitter) joined #osdev
07:15:52 <alphawarr1or> oh
07:16:03 <alphawarr1or> so they measured the time to read it
07:16:06 <rain1> yeah
07:16:38 --- quit: uvgroovy (Ping timeout: 248 seconds)
07:17:11 <alphawarr1or> oh well that's nice...
07:18:30 <alphawarr1or> btw why does the patch slow the os down?
07:18:48 <rain1> again i don't know for certain
07:19:18 <rain1> but i think what the patch does is roughly: when you switch between userspace and kernel code, it's doing a bit of extra work to cover up the kernel pages
07:19:31 <rain1> there might be more to it.. that could be totally wrong too
07:19:39 <alphawarr1or> oh I see
07:20:05 <alphawarr1or> well then lots of more work when an interrupt occurs...
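(The "timing trick" described above is essentially flush+reload: flush a cache line, let the transient code touch it — or not — and then time a reload to see whether it came from cache or from memory. A minimal sketch of the measurement side, assuming x86-64 with clflush/rdtscp and GCC/Clang intrinsics; the threshold is illustrative and has to be calibrated per machine:)

    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, __rdtscp, _mm_lfence */

    /* Returns the number of TSC ticks needed to load *addr.
       A "fast" result suggests the line was already in the cache. */
    static uint64_t probe(volatile uint8_t *addr)
    {
        unsigned int aux;
        _mm_lfence();
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                    /* the timed access */
        uint64_t end = __rdtscp(&aux);
        _mm_lfence();
        return end - start;
    }

    static void flush(volatile uint8_t *addr)
    {
        _mm_clflush((const void *)addr);
    }

    /* usage: flush(p); ...let the transient code run...; hit = probe(p) < THRESHOLD; */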
07:23:15 --- quit: nerdopoly (Quit: Hasta luego)
07:26:25 --- join: John___ (~John__@79.97.140.214) joined #osdev
07:27:18 --- quit: Belxjander (Ping timeout: 248 seconds)
07:27:58 --- quit: azonenberg_work (Ping timeout: 255 seconds)
07:29:35 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
07:33:15 --- join: hmmmm (~sdfgsf@pool-72-79-169-212.sctnpa.east.verizon.net) joined #osdev
07:35:55 <Vercas-web> alphawarr1or: World's on fire, but not everyone's a fire fighter. :P
07:36:26 <Vercas-web> rain1: Microsoft's fix has been rolled out already.
07:36:33 <alphawarr1or> oh I see
07:36:57 <amigara> Cache coloring when
07:37:04 <Vercas-web> My (much) more senior colleagues have had a helluva time dealing with Spectre and its consequences.
07:37:08 <alphawarr1or> well i'm just an IT student XD so I know nothing on how to fix a fire but I can reinstall windows for you
07:37:29 <alphawarr1or> vercas why? were you attacked?
07:38:10 <Vercas-web> Nope, but we work with Windows, and we're a security company. So we're in the crossfire.
07:38:20 --- quit: daniele_athome (Ping timeout: 240 seconds)
07:38:45 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
07:39:15 <Vercas-web> + we do drivers, and hypervisors... So this dumpster fire affects us on every level.
07:39:40 <amigara> Did you guys get any forewarning?
07:39:59 <alphawarr1or> oh wow well yeah then it must have hit you ppl hard there
07:40:05 <Vercas-web> amigara: You know I couldn't tell you even if we did, right?
07:40:20 <Levex> hello #osdev!
07:40:29 <rain1> hiu
07:40:34 <alphawarr1or> hello
07:40:38 <amigara> lawl
07:40:39 <Vercas-web> I'm under so many NDAs, at this point it's safer to say nothing at all.
07:40:45 <Vercas-web> (about my work)
07:41:09 <amigara> So what's your opinion on GPZ anyways?
07:41:16 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
07:41:43 <Vercas-web> I think Google did a very unusual move with Spectre.
07:41:54 <Vercas-web> Also, now I'm convinced that Google and NSA aren't butt buddies.
07:41:59 <amigara> By releasing early?
07:42:10 <Vercas-web> There's much more to this story, man.
07:42:16 <amigara> ik
07:42:30 <amigara> It's a living meme
07:43:00 <Vercas-web> I think they really shouldn't have kept such a broad embargo.
07:43:26 <amigara> Why?
07:44:30 <Vercas-web> Because it would've given major service providers more time to fix their crap before Google was *forced* to disclose the problem.
07:45:03 <Vercas-web> Google should've known that people who look at Linux commits aren't braindead.
07:45:28 <amigara> (Or even read the KAISER paper w/ a microarch foundation)
07:45:34 <alphawarr1or> is it really true that some of these attacks were already known for a long time?
07:45:48 <amigara> The class of vuln, yes
07:46:02 <Vercas-web> alphawarr1or: I dunno if the exact date is public or not... But yes.
07:46:07 <amigara> Can't speak to the recent kerfuffle but there's a lot of lit on cache timing attacks
07:46:10 <Vercas-web> Quite a long time for such a bug.
07:46:36 <amigara> And at the end of the day, this is just an extrapolation of the aforementioned class
07:46:45 <Vercas-web> Nnnnnnnnno, it's not.
07:47:16 <Vercas-web> Cache timing attack is just a tool used to extrapolate results from a Spectre exploit.
07:48:00 <alphawarr1or> sorry to ask but what exactly is spectre? I've read a paper but I can't understand how it works
07:48:59 <Vercas-web> It's an exploit that uses speculative execution and branch prediction in order to extract data that the CPU would otherwise forbid.
07:49:15 --- join: xenos1984 (~xenos1984@22-164-191-90.dyn.estpak.ee) joined #osdev
07:49:17 <alphawarr1or> oh I see thanks
07:49:24 <Vercas-web> Meltdown uses speculative execution and out-of-order execution to achieve the same result.
07:49:41 <alphawarr1or> oh so they are variation of the same attack?
07:49:48 <amigara> (tbh cache coloring when)
07:50:07 <Vercas-web> No, they're different exploits with a common mechanism: speculative execution.
07:50:37 <Vercas-web> Many people put them in the same pot because they both abuse speculative execution.
07:50:48 <Vercas-web> amigara: PCID
07:51:14 <Vercas-web> And VPID.
07:51:20 <alphawarr1or> speculative execution is when the cpu prefetches data for the next instruction?
07:51:30 <Vercas-web> alphawarr1or: No, that's prefetching. :P
07:51:34 <amigara> But at what cost!??!
07:51:41 <Vercas-web> amigara: Huh?
07:51:53 <amigara> (it's still expensive)
07:52:01 <amigara> All things considered
07:52:16 <Vercas-web> alphawarr1or: Speculative execution makes the CPU actually execute (parts of) instructions ahead of the "current" instruction.
07:52:45 <alphawarr1or> oh so we issue a read in a branch and even if it's not taken it'll be read?
07:53:13 <Vercas-web> alphawarr1or: Pretty much, *if the branch predictor thinks the branch will be taken*.
07:54:02 <alphawarr1or> oh I see thanks ^^
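(What is being described is the classic Spectre variant 1 bounds-check-bypass gadget: architecturally the branch is respected, but a mistrained predictor lets the out-of-bounds load and a dependent, secret-indexed load run transiently, leaving a cache footprint a timing probe can recover. A stripped-down, illustrative sketch of the gadget shape only — not a working exploit:)

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 4096];   /* probe array, one page per possible byte value */

    void victim(size_t x)
    {
        if (x < array1_size) {
            /* If the predictor guesses "taken" for an out-of-bounds x, this
               pair of loads still runs speculatively; the second load pulls
               one array2 page into the cache, encoding array1[x]. */
            uint8_t secret = array1[x];
            (void)array2[secret * 4096];
        }
    }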
07:54:07 <Brnocrist> https://twitter.com/mlarkin2012/status/949902956496789505
07:54:08 <bslsk05> ​twitter: <mlarkin2012> Does anyone know if Intel CPUs speculatively read PML4Es if CR3.PWT/CR3.PCD say a PAT type of UC (uncached) is to be used? And if no, could meltdown be fixed by flushing the TLB on return to usermode and setting CR3 as above? (or would performance suck more than the current fix)?
07:54:48 <Vercas-web> That won't work.
07:55:53 <alphawarr1or> so this attack relies on the cpu pre-executing the reads: it issues a read from a kernel space address, the cpu fetches data for it, and since we cannot read it the access will fault, but the data will still be present in the cache, and then it's read out with a timing attack?
07:57:31 <Brnocrist> Vercas-web: why
07:58:01 <rain1> I still don't get how a userspace application can refer to a kernel memory address... doesn't the paging translate a kernel address into "not a real address" or something
07:58:55 <Brnocrist> yes
07:59:10 <Brnocrist> but the speculative path still loads it in the cache
07:59:17 <alphawarr1or> yeah that's my problem too @rain1 esp since we have address space randomization
07:59:49 <Brnocrist> and also the #PF should never happen
08:01:22 <Vercas-web> rain1: No, it doesn't. It's a very real address. :P
08:01:38 --- join: multi_io (~olaf@x4db407c3.dyn.telefonica.de) joined #osdev
08:01:39 <rain1> can you teach me about this? I don't get it
08:01:51 <Vercas-web> Issue here is that speculative execution paths don't check if a micro-op's results are due to trigger an exception or not.
08:02:11 <alphawarr1or> btw if we could fix the cache timing attack then could we also fix these?
08:02:37 <rain1> i don't think you can "fix" cache timing, the whole point of the cache is that it makes things faster
08:02:40 <Brnocrist> it doesn't happen for real in a normal execution path
08:02:43 <rain1> so i feel like the fix should be elsewhere
08:02:43 <Vercas-web> The only "fix" for cache timing attacks is stopping userland from doing precise timing.
08:03:09 <alphawarr1or> oh but again that would make things slower...
08:03:18 --- quit: oaken-source (Ping timeout: 276 seconds)
08:03:28 <Brnocrist> if the speculative path guessed by the CPU is not right it is just unwound, so nothing happens, except the load of this address into the L1 cache
08:03:32 <Vercas-web> Brnocrist: To answer your `why` about the Tweet, changing PML4 cache behaviour will do literally nothing useful. About the second statement, it doesn't account for multithreading.
08:03:49 <rain1> wish someone could explain the thing about kernel addresses in userspace
08:03:50 <Vercas-web> alphawarr1or: It would brick all multimedia applications.
08:04:05 <Brnocrist> Vercas-web: it just caches the PML4?
08:04:20 <Vercas-web> Brnocrist: Whether or not it caches the PML4 is irrelevant.
08:04:22 <alphawarr1or> vercas-web: because they rely on fast cache?
08:04:31 <Vercas-web> alphawarr1or: They rely on precise timing.
08:04:43 <alphawarr1or> oh
08:04:50 <Brnocrist> Vercas-web: well, right on PML4 itself
08:04:52 <Vercas-web> The PWT and PCD bits in paging tables are pretty much never used, only in PTEs.
08:05:09 --- quit: multi_io_ (Ping timeout: 265 seconds)
08:05:50 <Vercas-web> There's no good reason in practice to have uncached paging tables. Write-through paging tables are just useless. Also, these don't affect TLBs.
08:06:26 <alphawarr1or> vercas-web: what do think how much speed will we lose with patching this?
08:06:35 <Brnocrist> Vercas-web: it makes sense on CoW?
08:06:49 <Vercas-web> Brnocrist: Huh?
08:07:29 <Vercas-web> alphawarr1or: Not speed, just accuracy.
08:07:39 <Brnocrist> ah write-through, I misread it
08:08:08 <alphawarr1or> accuracy? you mean we'll have more TLB misses?
08:08:26 <Vercas-web> No, I mean accuracy of timing...
08:09:08 <Vercas-web> Audio and video playback require accurate timing in order to hide latencies from hoomans.
08:10:10 <Vercas-web> e.g. if you're watching a video and the audio arrives more than 5 ms late, or 10 ms too soon, you're gonna get really annoyed, really quick.
08:10:22 <Vercas-web> Looking at people
08:10:23 <alphawarr1or> well yeha that's true
08:10:38 <Vercas-web> ... people's lips while they speak is the worst experience in the world with audio delays.
08:11:14 <alphawarr1or> and then the video will start to lag too...
08:12:32 <Vercas-web> There are plenty of good ways to do accurate timing, but RDTSC is by far the quickest in userland.
08:13:11 <alphawarr1or> RDTSC read the amount of CPU cyclesright?
08:14:31 <amigara> And, depending on the attack, one can get granular enough timing info from an inc
08:15:13 <amigara> or xadd
08:16:13 --- quit: sprocklem (Ping timeout: 252 seconds)
08:17:24 --- quit: Belxjander (Ping timeout: 256 seconds)
08:18:24 <Levex> alphawarr1or: yeah, it's ReaDs the Time Stamp Counter
08:18:36 <Levex> ... which is usually the number of cycles since reset
08:18:47 <Vercas-web> alphawarr1or: The TSC is really not that in practice ^
08:19:00 <alphawarr1or> oh what does it count in practice then?
08:19:09 <Vercas-web> Most often it runs at its own frequency - typically a very high one.
08:19:10 --- join: svk (~svk@p2003006A650D8E004C1F7179C178EC5A.dip0.t-ipconnect.de) joined #osdev
08:19:18 <alphawarr1or> oh higher than the clock speed?
08:19:26 <Vercas-web> Sometimes, yes.
08:19:40 <alphawarr1or> oh I see thanks
08:19:48 <alphawarr1or> I guess I have to learn more about low level stuff
08:19:53 <Vercas-web> It's *some* fixed frequency most often.
08:20:07 <Vercas-web> Also, keep in mind that most modern CPUs let you clock each individual core however you want.
08:20:09 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
08:20:21 <alphawarr1or> oh so even if the CPU has dynamic frequency it'll count differently?
08:20:21 <Levex> with speculative execution the previous topic, maybe it's worth mentioning RDTSCP
08:20:31 <Vercas-web> You can have, on the same processor, one core running at 1.5 GHz and one running at 4 GHz.
08:20:53 <alphawarr1or> that's cool
08:21:01 <Vercas-web> alphawarr1or: Well, since threads can migrate across cores, the TSC *has* to be in sync.
08:21:16 <alphawarr1or> oh so the TSC is global?
08:21:18 <Vercas-web> Older AMDs didn't do that. And ended up needing some tool on Windows in order to fix it.
08:21:30 <Vercas-web> Correct.
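(A minimal sketch of the RDTSC timing pattern being discussed, assuming an invariant, globally synchronized TSC and GCC/Clang intrinsics; the lfences keep the reads from being reordered around the measured work:)

    #include <stdint.h>
    #include <x86intrin.h>

    static inline uint64_t ticks(void)
    {
        _mm_lfence();            /* keep earlier work from leaking past the read */
        uint64_t t = __rdtsc();
        _mm_lfence();
        return t;
    }

    /* usage:
       uint64_t t0 = ticks();
       do_work();
       uint64_t t1 = ticks();
       // t1 - t0 is elapsed TSC ticks; converting to seconds needs the TSC
       // frequency, which is fixed and not tied to the current core clock. */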
08:21:53 <alphawarr1or> oh how about in vms? does the vm see the host's TSC or is it virtualized too with VT-x?
08:22:15 <Vercas-web> alphawarr1or: Entirely up to the hypervisor!
08:22:22 <alphawarr1or> oh wow
08:22:36 <alphawarr1or> well I've tried in qemu before but it was kind unreliable...
08:22:45 <Vercas-web> Qemu with KVM?
08:22:56 <alphawarr1or> I guess no wait let me see
08:23:13 <Vercas-web> Then it'll most likely be counting instructions.
08:23:29 <alphawarr1or> I didn't set kvm for qemu
08:23:56 <Vercas-web> Then your TSC is rather useless.
08:24:32 <Vercas-web> Qemu project has got a lot of great devices, but my experience tells me that its timers are unusable for any real timing.
08:24:46 <alphawarr1or> oh I see
08:24:54 <alphawarr1or> which vm has a reliable one?
08:25:04 <Vercas-web> VMWare, definitely.
08:25:12 <alphawarr1or> I wanted to see how long it takes for my interrupt code to finish
08:25:30 <Vercas-web> Never had issues with VMWare's timers. Xen didn't fail me either.
08:25:36 <alphawarr1or> oh well vmware is like a black box: it doesn't show any screen on my pc but virtualbox works
08:25:46 <Vercas-web> VBox failed me.
08:25:57 <alphawarr1or> you mean it's timer?
08:26:18 <Vercas-web> Yes, they are, for all practical purposes, random.
08:26:33 <alphawarr1or> well I use vmware at home but it woN't work for some reason on my laptop...
08:26:43 <gamozo> most hypervisors just directly pass through the TSC
08:26:53 <Vercas-web> VMWare has a per-guest setting for this.
08:27:24 <gamozo> however they can A. intercept the rdtsc instruction and hook this B. apply an offset to the rdtsc via VM stuff they can change or C. apply a scaler to the rdtsc rate
08:27:28 <gamozo> or any combination of these 3
08:27:33 <gamozo> but really no hypervisors do this by default
08:27:37 --- quit: aesycos (Quit: *poofs into confetti*)
08:27:47 <Vercas-web> It's best to just pass through, to be honest.
08:28:18 <alphawarr1or> oh but then how can I see how long something took?
08:28:18 <gamozo> apply some jitter at least ;D throw off some of these attacks
08:28:20 <Vercas-web> Virtualizing the TSC is a minefield if there's more than one vCPU.
08:28:26 <gamozo> alphawarr1or: you cant
08:28:29 <Vercas-web> alphawarr1or: Performance counters!
08:28:38 <alphawarr1or> what are those? aren't TSC one?
08:28:40 <gamozo> performance counters are ring0 only, at that point you're kinda toast
08:28:46 --- quit: silas (Quit: leaving)
08:28:57 <alphawarr1or> well it's ring 0 code that I wanna test
08:29:01 <gamozo> oh
08:29:03 <alphawarr1or> interrupt handlers
08:29:05 <gamozo> yeah, perf counters are fine then
08:29:12 <Vercas-web> alphawarr1or: Well, TSC can be used to measure performance, but CPUs have much better tools for this.
08:29:19 <Vercas-web> Intel PTRACE, for instance.
08:29:22 <alphawarr1or> oh wow
08:29:36 <gamozo> ptrace just tells you what code was hit
08:29:40 <Vercas-web> Whut.
08:29:44 <alphawarr1or> isn't PTRACE a linux functions?
08:29:46 <Vercas-web> It tells you everything you ask it to.
08:29:58 <Vercas-web> There's buttloads of settings.
08:30:09 <gamozo> there are a bunch of filters
08:30:14 <Vercas-web> alphawarr1or: Linux implements ptrace and they don't rename it.
08:30:25 <gamozo> but IIRC it is only branches and (optionally) time stamp counters logged
08:30:36 <Vercas-web> gamozo: And a bunch of data that you can include in your packets. One of them is timing.
08:30:42 <alphawarr1or> oh so what is PTRACE then? is it an instruction?
08:30:45 <gamozo> yeah, the timing is just rdtsc though ;P
08:30:50 <Vercas-web> alphawarr1or: A feature.
08:31:08 <Vercas-web> gamozo: Yes, however they're deltas, if I recall correctly.
08:31:19 <gamozo> yeah, I don't remember
08:31:22 <Vercas-web> And *those* can be virtualized properly, to exclude VM exit and re-entry times.
08:31:27 <gamozo> i read it once, got excited, then read that it's not virtualized
08:31:35 <gamozo> i think in skylakes it now can be done in VMs
08:31:43 <gamozo> but before then it was not available under VT-x
08:31:48 <Vercas-web> VMWare has had that settings for years...
08:31:50 <Levex> I think it's called Intel PT, not PTRACE. avoiding confusiong with ptrace(2) of linux
08:32:04 <gamozo> people say PT yeah
08:32:23 <gamozo> PT on desktop CPUs has been usable in a VM for a bit
08:32:31 <alphawarr1or> since when is this a feature?
08:32:33 <gamozo> PT on server CPUs I think was only this summer with the new CPUs
08:32:46 <gamozo> spec came out like 4-5 years ago, silicon showed up 2-3 years ago
08:33:02 <alphawarr1or> oh so it's not on my CPU
08:33:05 <gamozo> it's pretty universally in silicon as of 1 year ago (server and desktop cpus of various perf levels)
08:33:23 <Vercas-web> Eh, my Haswells support processor tracing.
08:33:37 <gamozo> yeah, it goes back a bit if you're not using it in a VM
08:34:03 <Vercas-web> Imma try to add it to my OS eventually, to test inside VMWare.
08:34:11 <Vercas-web> I really don't see why it wouldn't work inside VMs.
08:34:20 <gamozo> like they physically don't support it
08:34:41 <alphawarr1or> well my poor g540 came out in 11q3 and my little i5-2520m came out in 11q1
08:34:49 <alphawarr1or> so like 7 years ago
08:35:46 <gamozo> "Initial implementations of Intel Processor Trace do not support tracing in VMX operation. Such processors indicate
08:35:49 <gamozo> this by returning 0 for IA32_VMX_MISC[bit 14]. On these processors, execution of the VMXON instruction clears
08:35:52 <gamozo> IA32_RTIT_CTL.TraceEn and any attempt to write IA32_RTIT_CTL in VMX operation causes a general-protection
08:35:55 <gamozo> exception (#GP)"
08:35:58 <gamozo> section 35.2.8.4 in the system dev manual
08:36:11 <gamozo> I'm pretty sure this was skylake where this changed, but of course, check your IA32_VMX_MISC bit 14 to be sure
08:36:28 <Vercas-web> I see.
08:36:43 <Vercas-web> There's also the PMU...
08:37:10 <gamozo> I personally just do random sampling for my profiling, usually it's good enough
08:37:15 --- quit: robert_ (Read error: Connection reset by peer)
08:37:16 <gamozo> and it's just so generic and portable
08:37:22 <alphawarr1or> PMU?
08:37:29 <gamozo> sure you can miss rare events, but do you really care about rare events in perf?
08:37:33 --- join: robert_ (~hellspawn@24.96.111.33) joined #osdev
08:37:33 --- quit: robert_ (Changing host)
08:37:33 --- join: robert_ (~hellspawn@objectx/robert) joined #osdev
08:37:34 <gamozo> debugging, sure full traces are great
08:37:36 <Vercas-web> Performance Monitoring Unit. Available since Pentium.
08:37:46 <gamozo> it's where the perf counters are
08:37:55 --- join: azonenberg_work (~azonenber@172.58.40.146) joined #osdev
08:37:57 <gamozo> you get like 4-16 different counters that you can program to count different events
08:38:00 <alphawarr1or> oh I see
08:38:06 <alphawarr1or> that sounds neat
08:38:14 <gamozo> these events vary largely by the uarch, there are like 4 different ones that are universally supported since pentium
08:38:28 <gamozo> but you can say things like "increment this counter every time you access data and you miss L1 and L2 and have to hit L3/memory"
08:38:36 <alphawarr1or> wow
08:38:37 <gamozo> or "increase this counter for every instruction retired"
08:38:42 <alphawarr1or> how can you code that?
08:38:43 <gamozo> which is much different than the tsc
08:38:47 <gamozo> it's pretty simple
08:39:04 <alphawarr1or> read the intel developers guide right?
08:39:04 <Vercas-web> And excellent for micro-optimizing hot paths.
08:39:05 <gamozo> set an MSR that has a bit that enables it, and a few bits dedicated to the performance counter ID that determines the event to count on
08:39:14 <gamozo> and a count register you can write to reset the counter (or preprogram to some value)
08:39:32 <gamozo> and an instruction `rdpmc` to read these results from the counters quickly rather than having to `rdmsr` the counter MSR (however both work)
08:40:09 <gamozo> using a handful of them usually can give you a great idea of how your code is working, what your bottlenecks are, etc
08:40:22 <alphawarr1or> oh wow
08:40:23 <gamozo> it wont tell you _where_ in your code the bottlenecks are, just a general "hey your code is stuck on hitting memory"
08:40:38 <Vercas-web> Also, the PMU part of the manual pretty much states plainly that the TSC runs at fixed frequencies. Hm.
08:40:46 <alphawarr1or> I guess i'll have to find the corresponding parts in the intel manuals...
08:41:01 <Vercas-web> alphawarr1or: The Intel manuals are a fun read. :P
08:41:07 * Vercas-web has got to go.
08:41:10 --- quit: Vercas-web (Quit: Page closed)
08:41:29 <gamozo> you can also have an interrupt fire (configed by the APIC) when the performance counter overflows, and since you can also set this counter (it's a 48-bit value IIRC) in theory you can set the counter to MAX_COUNTER_VALUE-1, and the next time that event occurs you get an interrupt
08:41:31 <alphawarr1or> well i've tried to read them before... I couldN't make any sense out of it
08:41:54 <gamozo> I use this to get interrupts on the code that specifically is causing a counter (and I pick a counter of an event I want to know if it happens. like hitting memory in something that should be cache optimized)
08:42:26 <gamozo> I use this super heavily, however it isn't guaranteed that the next increment causes the interrupt where the actual event occurred, there's some jitter and delay so you can't rely on it for perfect instrumentation
08:42:34 <gamozo> however it's great for roughly "where in my code did this happen"
08:42:36 --- join: Asu (~sdelang@AMarseille-658-1-33-119.w86-219.abo.wanadoo.fr) joined #osdev
08:43:07 --- quit: bm371613 (Quit: Konversation terminated!)
08:43:08 <gamozo> I really should graph the distance from the instruction which caused the perf counter to increment and the instruction i actually get
08:43:20 <gamozo> i have no idea if it's 1000 instructions after the counter overflows, or like... 5
08:43:40 <gamozo> I think there's an intel manual specifically for perf counters
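(A rough ring-0 sketch of the PMU programming described above, assuming Intel architectural performance monitoring: IA32_PERFEVTSEL0 (MSR 0x186) selects the event, IA32_PMC0 (MSR 0xC1) holds the count, and rdpmc with ECX=0 reads it back; event 0xC0 / umask 0x00 is the architectural "instructions retired" event. The function name and exact setup are illustrative:)

    #include <stdint.h>

    static inline void wrmsr(uint32_t msr, uint64_t val)
    {
        asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val), "d"((uint32_t)(val >> 32)));
    }

    static inline uint64_t rdpmc(uint32_t ctr)
    {
        uint32_t lo, hi;
        asm volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(ctr));
        return ((uint64_t)hi << 32) | lo;
    }

    #define IA32_PMC0        0x0C1
    #define IA32_PERFEVTSEL0 0x186

    void count_instructions_retired(void)
    {
        /* event 0xC0, umask 0x00 = instructions retired;
           bit 16 = count in user mode, bit 17 = count in kernel mode, bit 22 = enable */
        wrmsr(IA32_PMC0, 0);
        wrmsr(IA32_PERFEVTSEL0, 0xC0 | (0x00 << 8) | (1 << 16) | (1 << 17) | (1 << 22));

        /* ... run the code being measured ... */

        uint64_t retired = rdpmc(0);
        (void)retired;
    }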
08:44:24 --- quit: Asu (Remote host closed the connection)
08:44:39 <bcos> For "extreme detail", performance monitoring counters typically run into the observer effect
08:44:43 <bcos> ( https://en.wikipedia.org/wiki/Observer_effect_(physics) )
08:44:43 <bslsk05> ​en.wikipedia.org: Observer effect (physics) - Wikipedia
08:44:44 --- join: Asu (~sdelang@AMarseille-658-1-33-119.w86-219.abo.wanadoo.fr) joined #osdev
08:44:48 <gamozo> ah that's just the uncore performance counters
08:44:52 <gamozo> https://software.intel.com/en-us/articles/intel-sdm
08:44:53 <bslsk05> ​software.intel.com: Intel® 64 and IA-32 Architectures Software Developer Manuals | Intel® Software
08:45:01 <gamozo> scroll down to "Uncore Performance Monitoring Reference Manuals"
08:45:08 <gamozo> there's a lot of really cool things outside of the CPU you can monitor too
08:45:11 <gamozo> very CPU specific though
08:45:39 <gamozo> bcos: yeah, only useful for single shots where things are unhindered beforehands
08:46:25 <gamozo> omg this poor disk just is getting brutalized by this build. I really really need SSDs
08:46:43 <bcos> Need more RAM for larger disk caches..
08:46:51 <gamozo> I've got 512 GiB of ram
08:46:56 <gamozo> 420 GiB of which is waiting to be flushed to disk
08:47:08 <bcos> 512 GiB isn't enough for C++ templates! :-)
08:47:10 <gamozo> the build has effectively halted at this point
08:47:17 <gamozo> 5% CPU use :(
08:47:41 <gamozo> hopefully this build finishes in a few days, so then I can order adequate SSD space
08:47:50 <gamozo> I'm guessing it'll be 2-5 TiB for the build
08:48:39 --- quit: heat (Ping timeout: 265 seconds)
08:48:56 <gamozo> what I really need is some EPYCs with the CPU-pcie-raid for NVMe drives
08:49:00 <xenos1984> gamozo: what are you building? o.O emacs?
08:49:10 <gamozo> all of windows debug build with no optimizations ;D
08:49:19 <xenos1984> ah, nice
08:49:53 <gamozo> i get nice warnings every few min saying how unsupported this build is and how the kernel team will ignore any bugs I report from this kernel :P
08:50:04 --- join: chjk6x (~chjk6x@160.176.67.195) joined #osdev
08:50:31 <gamozo> anyways, yeah this AMD raid stuff (Intel apparently has it too but you have to pay for it and unlock it with a key like CPU DLC)
08:50:40 <gamozo> linear scaling of raids, CPU accelerated stuff for NVMe
08:50:54 <gamozo> https://www.pcper.com/news/General-Tech/AMD-Releases-NVMe-RAID-Support-X399-Threadripper-Platform
08:50:56 <bslsk05> ​www.pcper.com: AMD Releases NVMe RAID Support for X399 Threadripper Platform | PC Perspective
08:51:03 <gamozo> perfect linear scaling for their 6x raid0 setup
08:51:09 <gamozo> 21 GiB/s lmao
08:51:52 --- join: navidr (uid112413@gateway/web/irccloud.com/x-znfyhvgccctwifoo) joined #osdev
08:52:21 --- join: Asu` (~sdelang@AMarseille-658-1-33-119.w86-219.abo.wanadoo.fr) joined #osdev
08:56:22 --- quit: Asu (Ping timeout: 248 seconds)
09:06:08 <alphawarr1or> https://www.intel.com/content/dam/www/public/us/en/documents/manuals/6th-gen-core-family-uncore-performance-monitoring-manual.pdf is this the one I'm looking for?
09:08:10 --- quit: graphitemaster (Quit: ZNC - http://znc.in)
09:08:59 --- join: graphitemaster (~graphitem@unaffiliated/graphitemaster) joined #osdev
09:10:50 <rain1> so page table permissions can no longer be relied on
09:11:37 <gamozo> alphawarr1or: no, that's for uncore
09:11:44 <gamozo> you'll just want the standard system programmers manual
09:12:18 <rmf1723> https://software.intel.com/en-us/articles/intel-sdm
09:12:22 <rmf1723> This, volume 3
09:12:40 <rmf1723> Volume 2 is also useful sometimes.
09:13:23 <alphawarr1or> volume 3 A, B, C or D?
09:13:41 <alphawarr1or> oh it's B
09:14:31 --- quit: azonenberg_work (Ping timeout: 264 seconds)
09:14:44 <alphawarr1or> oh damn so many sensors
09:14:59 <rmf1723> Oh, I just get the combine vol3 PDF, dunno which part is which
09:15:25 <alphawarr1or> oh I see
09:15:41 <alphawarr1or> vol 3B says "Continues the coverage on system programming subjects begun in volume 3A. Volume 3B covers thermal and power management features, debugging, and performance monitoring."
09:16:10 --- join: freakazoid0223 (~IceChat9@pool-108-52-4-148.phlapa.fios.verizon.net) joined #osdev
09:17:55 <gamozo> just get the section 3 combined manual
09:23:16 <rmf1723> Hmm, the UEFI spec says that EfiBootServicesData is "Memory available for general use." after calling ExitBootServices, but it seems that my EFI application RSP is within EfiBootServicesData, as opposed to EfiLoaderData as I would expect ("Note: the
09:23:19 <rmf1723> OS loader that called
09:23:21 <rmf1723> ExitBootServices()
09:23:23 <rmf1723> is utilizing one or
09:23:25 <rmf1723> more
09:23:27 <rmf1723> EfiLoaderData
09:23:29 <rmf1723> ranges.")
09:23:31 <rmf1723> Shit, sorry for that mess.
09:23:42 <graphitemaster> copying from PDFs suck :P
09:24:40 --- join: gdh (~gdh@2605:a601:639:2c00:e0d9:bf28:2735:8659) joined #osdev
09:24:54 <rmf1723> This means that after ExitBootServices, I can't trash EfiBootServicesData. But I also have no guarantee that that's where the RSP will be. Is this a bug in the firmware (it's OVMF), or am I missing something in the spec?
09:25:21 --- join: oaken-source (~oaken-sou@mue-88-130-48-085.dsl.tropolys.de) joined #osdev
09:26:03 --- nick: retpolin_ -> retpoline
09:26:16 <rmf1723> By trash, I mean that when I load a new CR3 it cannot be left unmapped. Which also contradicts e.g. what the spec says about SetVirtualAddressMap.
09:26:57 <rmf1723> (As it only requires mappings for memory with Runtime attribute, which EfiBootServicesData ranges don't have)
09:27:20 --- quit: John___ (Read error: Connection reset by peer)
09:32:16 --- join: m3nt4L (~asvos@2a02:587:a019:8300:3285:a9ff:fe8f:665d) joined #osdev
09:32:41 --- quit: m3nt4L (Client Quit)
09:35:14 <doug16k> gamozo, what takes 512GB to build?
09:35:48 --- join: dshin_ (uid10647@gateway/web/irccloud.com/x-jrpnnfxznwwtwlsy) joined #osdev
09:36:34 <doug16k> graphitemaster, if you use okular to read pdfs, it has an option to ignore the restrictions :)
09:37:16 <doug16k> and in general the text copy does the right thing
09:37:43 <doug16k> maybe not every time, but practically every time. maybe OCR'd mess documents could be an issue
09:41:18 <doug16k> gamozo, ah, windows. yeah I mentioned before that a "normal" machine can't do that :D
09:42:04 <sham1> Hi
09:42:48 <doug16k> rmf1723, ^^ see ocular comment, I mistakenly pinged graphitemaster
09:44:54 --- quit: oaken-source (Ping timeout: 248 seconds)
09:45:10 <gamozo> apparently a good NVMe SSD + 32 GiB is what they recommend
09:45:28 <gamozo> but they also don't recommend anyone build all of windows, just what they are working on :P
09:45:56 <gamozo> Damn, just called a salesman for servers and he was saying EPYC is a failure as they have no units
09:46:03 <gamozo> apparently AMD keeps delaying and nobody is getting stock
09:46:15 <gamozo> a bit harsh, but the availability is garbage
09:52:28 <gamozo> perf/watt looks better on Intel, AVX-512 is great and might be required for me in the future, EPT is more fully featured than SVM's NPT
09:52:33 --- join: azonenberg_work (~azonenber@74.85.93.91) joined #osdev
09:52:46 <gamozo> but AMD has better docs, destroys intel in perf/$, and SVM is a much cleaner interface
09:54:38 --- part: gdh left #osdev
09:54:40 <doug16k> nice, my qemu nvme tracing patch is in master now
09:57:34 <gamozo> fix or feature?
09:57:38 <doug16k> feature
09:58:22 <doug16k> the whole thing. there was zero tracing and no reporting whatsoever of guest errors before
09:59:21 <doug16k> my patch detects every case of UB (guest error) I could find, and gives detailed error messages in every error case
10:01:18 <clever> nice
10:02:07 <doug16k> it also reports some success stuff to the log, but it's mostly UB/error tracing, all named such that you can use wildcards to just get UB (-trace nvme_ub*) or errors (-trace nvme_err*) or everything (-trace nvme_*), or a combination with multiple trace options
10:02:39 <doug16k> -trace nvme_op* for successes IIRC
10:03:37 <doug16k> UB also goes to the -d guest_error log, so even if you are clueless about tracing you get the reports for bad stuff
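(Usage sketch: those wildcard patterns go straight on QEMU's -trace option — exact event names depend on the QEMU build:)

    # undefined-behaviour events plus the guest-error log:
    qemu-system-x86_64 -trace 'nvme_ub*' -d guest_errors ...
    # everything nvme-related:
    qemu-system-x86_64 -trace 'nvme_*' ...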
10:03:51 --- join: srjek (~srjek@2601:249:601:9e9d:edbb:7600:e9c6:4999) joined #osdev
10:06:36 --- quit: Humble (Ping timeout: 252 seconds)
10:09:45 --- join: user10032 (~Thirteen@90.209.104.11) joined #osdev
10:09:58 --- quit: azonenberg_work (Ping timeout: 240 seconds)
10:11:27 --- join: sinetek_ (~sinetek@modemcable018.210-57-74.mc.videotron.ca) joined #osdev
10:15:51 --- join: Asu (~sdelang@92.184.97.120) joined #osdev
10:16:24 --- quit: Asu` (Ping timeout: 248 seconds)
10:18:59 --- join: Humble (~hchiramm@2405:204:5706:85a4:aee7:1f9d:2c3e:197f) joined #osdev
10:24:22 --- join: uvgroovy_ (~uvgroovy@199.188.233.130) joined #osdev
10:25:56 --- quit: uvgroovy (Ping timeout: 246 seconds)
10:25:56 --- nick: uvgroovy_ -> uvgroovy
10:31:09 --- join: oaken-source (~oaken-sou@p3E9D2B74.dip0.t-ipconnect.de) joined #osdev
10:34:00 --- quit: Belxjander (Ping timeout: 248 seconds)
10:36:49 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
10:37:02 <gamozo> ah cool
10:38:49 --- quit: svk (Quit: Leaving)
10:41:55 --- join: pictron (~tom@pool-173-79-33-247.washdc.fios.verizon.net) joined #osdev
10:51:08 --- quit: rain1 (Quit: WeeChat 2.0.1)
10:51:51 --- quit: AnyTimeTraveler (Quit: AnyTimeTraveler)
10:54:26 --- join: AnyTimeTraveler (~quassel@titan-systems.net) joined #osdev
10:54:27 --- join: heat (~heat@sortix/contributor/heat) joined #osdev
10:58:28 <heat> rmf1723: When you're mapping/unmapping EFI ranges(and have a new address space) you shouldn't be using the stack given by the firmware
10:59:47 <heat> Should look like this: ExitBootServices() -> jmp to kernel entry -> setup kernel mappings -> set a new bootstrap stack
11:00:14 --- quit: quc (Ping timeout: 246 seconds)
11:00:53 <heat> If you dig in the UEFI spec you'll probably find something that tells you that you shouldn't rely on the stack provided by the firmware after ExitBootServices
11:01:03 <geist> yah
11:01:25 <geist> it's sitting off in one of the now unused regions of the memory map
11:02:48 --- quit: oaken-source (Ping timeout: 248 seconds)
11:03:05 <heat> after ExitBootServices() you shouldn't really rely on things like that, they should be set up by you ASAP
11:06:39 <heat> In a way, you should think of the UEFI address space post ExitBootServices() as a mapping that's only there because you might need it when you call the runtime services
11:06:49 <doug16k> yeah, you should switch to a stack you control as soon as possible when you get control. this applies everywhere, not just UEFI
11:06:54 <heat> Other than that, you shouldn't use them for anything
11:07:53 --- join: rain1 (~user@unaffiliated/rain1) joined #osdev
11:08:20 <geist> that being said you should also grab the memory map from EFI before exiting, then you can figure out what is in use
11:08:30 <geist> it's basically an über e820
11:08:47 <geist> in fact you have to, iirc. you have to pass the handle to the current memory map to exitbootservices
11:09:46 <heat> yeah
11:09:48 --- join: sgautam (~sgautam@59.182.248.0) joined #osdev
11:09:51 --- nick: sgautam -> GautamS
11:10:37 <heat> what's pretty cool is that afaik you can set your own memory types so you can virtually find things you need with the mmap without the bootloader passing them in a register or structure
11:11:41 <geist> oh that's a good point. you still have to pass the memory map itself, but aside from that you can essentially tag stuff you allocated in EFI with that
11:11:47 <geist> never occurred to me
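(A minimal sketch of the handoff sequence being described, assuming an EDK2-style environment that provides gBS and ImageHandle; GetMemoryMap is the last call before ExitBootServices because the MapKey it returns is what ExitBootServices validates — if the map changed in between, re-fetch it and retry:)

    /* Sketch only: error handling trimmed. */
    EFI_MEMORY_DESCRIPTOR *Map = NULL;
    UINTN MapSize = 0, MapKey, DescSize;
    UINT32 DescVersion;
    EFI_STATUS Status;

    /* First call just reports the required buffer size (returns EFI_BUFFER_TOO_SMALL). */
    gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
    MapSize += 2 * DescSize;                       /* slack: the allocation below changes the map */
    gBS->AllocatePool(EfiLoaderData, MapSize, (VOID **)&Map);

    Status = gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
    if (!EFI_ERROR(Status))
        Status = gBS->ExitBootServices(ImageHandle, MapKey);

    /* From here on boot services are gone. Switch to your own stack and page
       tables as soon as possible; the saved Map tells you what memory the
       firmware was using (EfiBootServicesData etc. becomes reclaimable). */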
11:12:36 <rain1> https://groups.google.com/forum/#!topic/minix3/RMQgCbHxlX4 it is likely that MINIX is vulnerable, too
11:12:40 <bslsk05> ​groups.google.com: Google Groups
11:12:46 <rmf1723> Oh, you're right.
11:13:05 --- join: paranoidandroid_ (58e6f768@gateway/web/freenode/ip.88.230.247.104) joined #osdev
11:13:08 <rmf1723> Memory types with the high bit set are for OS loaders to use as they please.
11:13:12 <rmf1723> I should use that.
11:14:05 --- quit: paranoidandroid_ (Client Quit)
11:15:02 <rain1> but the intel quark chip that runs IME probably doesnt have out of order execution
11:15:22 <heat> it used a 486 iirc
11:16:19 <geist> also early atoms dont appear to be OOO enough to have any of these effects too
11:16:36 <heat> huh, it uses a quark but I swear they used to use 486's?
11:16:38 <geist> Bonnell based ones. i think the later Goldmont may be sophisticated
11:16:58 <geist> from what I had heard they were using sparc until fairly recently
11:17:09 <geist> like last 5 years, but then IME was probably only implemented in server chipsets at that point
11:18:00 <heat> that feeling when the i486 was produced from 1989 to 2007
11:19:46 <heat> Heh, looks like I was wrong, dunno where I got that from
11:20:20 <geist> yeah it sounded like some sort of internet rumor. i dont think anyone knows at all what core the IME runs on, or if it's custom
11:20:49 <geist> could be the same thing that larrabee was based on? or knights landing? clearly they have some older cores floating around that they can repurpose, but its unclear which one it is
11:24:43 --- quit: chjk6x (Ping timeout: 260 seconds)
11:26:07 <Kazinsal> Today's example of "parallel Make is a shitshow": on one invocation, NASM spat out "phase error detected at end of assembly", choked, and I cannot reproduce it.
11:26:46 <geist> aww, parallel make works fine. folks that implement non parallel safe make files, that's like writing multithreaded code without any locks
11:26:59 <geist> still humans fault
11:27:20 <Kazinsal> Yeah I have a feeling that's the case
11:29:16 --- quit: glfernando (Remote host closed the connection)
11:31:07 --- join: mechanist2 (~mike@207.244.71.98) joined #osdev
11:31:32 --- quit: [Brain] (Ping timeout: 265 seconds)
11:32:26 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
11:33:05 --- join: sprocklem (~sprocklem@unaffiliated/sprocklem) joined #osdev
11:38:24 --- quit: sinetek_ (Quit: This computer has gone to sleep)
11:44:06 --- quit: GautamS (Quit: GautamS)
11:50:46 --- quit: daniele_athome (Ping timeout: 248 seconds)
11:51:38 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
12:00:20 --- join: quc (~quc@host-89-230-168-166.dynamic.mm.pl) joined #osdev
12:03:55 <XgF> In practice you can't use those memory types with the top bit set without crashing your UEFI
12:04:08 <XgF> or at least ,crashing some people's UEFIs
12:09:41 --- join: chjk6x (~chjk6x@160.176.67.195) joined #osdev
12:09:56 --- join: sortie (~sortie@static-5-186-55-44.ip.fibianet.dk) joined #osdev
12:11:16 --- quit: sortie (Client Quit)
12:18:14 --- join: sortie (~sortie@static-5-186-55-44.ip.fibianet.dk) joined #osdev
12:18:23 --- quit: sortie (Client Quit)
12:21:54 --- quit: sprocklem (Ping timeout: 240 seconds)
12:29:31 --- quit: voidah (Ping timeout: 264 seconds)
12:31:18 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
12:31:20 --- join: glfernando (glfernando@nat/google/x-jbwhiomymchopmnu) joined #osdev
12:31:59 --- quit: mechanist2 (Remote host closed the connection)
12:32:26 --- join: mechanist2 (~mpascale@207.244.71.98) joined #osdev
12:32:52 --- join: Pyjong (~pi@14-232-24-185.static.servebyte.com) joined #osdev
12:33:21 --- join: chjk6x_ (~chjk6x@adsl196-127-86-206-196.adsl196-3.iam.net.ma) joined #osdev
12:33:46 --- join: immibis (~chatzilla@122-59-200-50.jetstream.xtra.co.nz) joined #osdev
12:36:20 --- quit: chjk6x (Ping timeout: 240 seconds)
12:37:55 --- quit: Belxjander (Ping timeout: 264 seconds)
12:39:26 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
12:41:34 --- join: John___ (~John__@79.97.140.214) joined #osdev
12:43:14 --- nick: Asu -> Asu`afk
12:43:16 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
12:44:06 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
12:44:20 --- join: voidah (~voidah@unaffiliated/voider) joined #osdev
12:48:37 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
12:54:16 --- join: bm371613 (~bartek@89-64-31-161.dynamic.chello.pl) joined #osdev
12:59:26 --- quit: m_t (Quit: Leaving)
13:13:13 --- quit: nzoueidi (Ping timeout: 260 seconds)
13:16:49 --- join: oshogbo (~oshogbo@80-219-208-230.dclient.hispeed.ch) joined #osdev
13:19:26 --- quit: vmlinuz (Quit: Leaving)
13:20:21 --- join: jakogut (~jakogut_@162.251.69.147) joined #osdev
13:21:00 --- join: farb (~farb@unaffiliated/farb) joined #osdev
13:22:26 --- join: uvgroovy_ (~uvgroovy@199.188.233.130) joined #osdev
13:22:50 --- quit: aosfields_ (Ping timeout: 240 seconds)
13:22:51 --- join: Shamar (~giacomote@unaffiliated/giacomotesio) joined #osdev
13:23:50 --- quit: uvgroovy (Ping timeout: 240 seconds)
13:23:50 --- nick: uvgroovy_ -> uvgroovy
13:26:04 --- join: sortie (~sortie@static-5-186-55-44.ip.fibianet.dk) joined #osdev
13:27:01 --- quit: eivarv (Quit: Sleep)
13:32:06 --- join: tacco\unfoog (~tacco@dslb-178-007-244-013.178.007.pools.vodafone-ip.de) joined #osdev
13:35:28 --- quit: vaibhav (Quit: Leaving)
13:38:50 --- part: jakogut left #osdev
13:38:53 --- quit: Pyjong (Ping timeout: 260 seconds)
13:39:33 --- join: sprocklem (~sprocklem@unaffiliated/sprocklem) joined #osdev
13:40:06 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
13:43:48 --- join: xerpi (~xerpi@238.red-83-45-192.dynamicip.rima-tde.net) joined #osdev
13:44:47 --- quit: xerpi (Remote host closed the connection)
13:45:00 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
13:45:08 --- join: xerpi (~xerpi@238.red-83-45-192.dynamicip.rima-tde.net) joined #osdev
13:45:28 --- quit: Belxjander (Ping timeout: 240 seconds)
13:47:03 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
13:49:35 --- quit: glauxosdever (Quit: leaving)
13:49:55 --- quit: Halofreak1990 (Ping timeout: 264 seconds)
13:52:31 <heat> 29 files changed, 244 insertions(+), 883 deletions(-)
13:52:36 <heat> Thicc commit
13:52:37 <sortie> Woot
13:52:49 <_mjg> license change
13:52:50 --- quit: fnodeuser (Ping timeout: 240 seconds)
13:52:53 <heat> no
13:52:59 <_mjg> dabg
13:53:00 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
13:53:23 --- join: fnodeuser (~irssiuser@ppp-2-86-205-194.home.otenet.gr) joined #osdev
13:53:52 <heat> I deleted my not-so-good dynamic linker and adapted musl's ldso while making changes to the kernel
13:54:29 <heat> resulted in a working-ish dynamic linker + less code
13:54:34 --- quit: bm371613 (Quit: Konversation terminated!)
13:54:42 <heat> and now it's all together in a single .so
14:02:59 --- quit: navidr (Quit: Connection closed for inactivity)
14:06:16 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
14:10:46 --- quit: farb (Quit: Lost terminal)
14:12:25 --- quit: user10032 (Quit: Leaving)
14:14:27 --- quit: dshin_ (Quit: Connection closed for inactivity)
14:19:15 <fnodeuser> https://www.youtube.com/watch?v=W7rOLRlqhuM
14:19:17 <bslsk05> ​'Barilla | Masters of Pasta with Roger Federer & Davide Oldani (Extended Version)' by Barilla (00:02:00)
14:20:42 --- part: qrf left #osdev
14:26:03 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
14:34:48 --- join: dbittman (~dbittman@2601:647:ca00:1651:b26e:bfff:fe31:5ba2) joined #osdev
14:38:10 --- join: SN4T14 (~SN4T14@81.4.104.33) joined #osdev
14:39:33 --- quit: mavhq (Read error: Connection reset by peer)
14:39:59 --- quit: Shamar (Quit: leaving)
14:40:48 --- join: mavhq (~quassel@cpc77319-basf12-2-0-cust433.12-3.cable.virginm.net) joined #osdev
14:48:58 --- quit: Halofreak1990 (Ping timeout: 256 seconds)
14:52:36 --- quit: mavhq (Ping timeout: 265 seconds)
14:53:53 --- join: kimundi (~Kimundi@p57A89CFB.dip0.t-ipconnect.de) joined #osdev
14:55:10 <kimundi> Hi, is anyone online right now? I have a general osdev question regarding interrupts and low-level coroutines
14:55:39 <heat> sure
14:55:39 <sortie> Yes
14:56:11 <sortie> kimundi: Note that people might be upstairs getting coffee or cooking. Ask your question and maybe you'll get an answer when the right people have time. :)
14:56:19 <sortie> Welcome!
14:58:35 --- join: mavhq (~quassel@cpc77319-basf12-2-0-cust433.12-3.cable.virginm.net) joined #osdev
14:58:40 <kimundi> Alright! So, basically as part of an (ungraded) university course I'm implementing a small x86_64 operating system, and my group has trouble with the next implementation step
14:59:47 <heat> yeah?
15:00:06 --- quit: uvgroovy (Ping timeout: 248 seconds)
15:00:22 <kimundi> what we have working right now is cooperative scheduling - we have a scheduler that knows a number of tasks, each with its own stack; the scheduler can start scheduling a task, and the running task can call into the scheduler to cause a context switch to another task, which involves assembler code for swapping out the register values and all that
15:00:36 <heat> yeah
15:00:53 <kimundi> now we are supposed to implement preemptive scheduling - that is switching the running task from a timer interrupt
15:01:07 <heat> yea
15:01:23 --- quit: john51_ (Ping timeout: 246 seconds)
15:01:28 <kimundi> but we are having trouble figuring out how to correctly do a context switch while we are in an interrupt
15:01:36 <heat> Ah, okay
15:01:54 <heat> Basically, you want to switch stacks before returning to the assembly dispatcher
15:02:21 <heat> See, when you're dispatching IRQs you save every register
15:02:35 <heat> when you're exiting the IRQ, you undo those saves and pop things back
15:02:49 <heat> if you swap the stack, you'll simply pop another thread in
15:03:10 <doug16k> the ISR should save the outgoing context, then call something that chooses the next thread, then return the context pointer for the new thread, and the asm stub restores context using the returned pointer
15:03:44 <doug16k> the context pointer could mean the stack pointer, but exactly what it means depends on how you do it
15:04:25 --- join: john51 (~john@15255.s.t4vps.eu) joined #osdev
15:04:31 --- join: adu (~ajr@pool-173-66-242-131.washdc.fios.verizon.net) joined #osdev
15:05:04 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
15:05:12 <doug16k> oh and you pass the context pointer (pointing to the outgoing thread's context) to the function that chooses the next thread. that thing would save the context pointer for the outgoing thread in that thread's info structure
15:06:45 <kimundi> okay - that sounds reasonable! Sadly it doesn't really fit our course-given structure of the interrupt handler and scheduler :(
15:07:02 <heat> that's stupid
15:07:04 <doug16k> the idea is, when thread A gets interrupted, the timer interrupt handler saves the context and remembers where thread A's context was saved. later, when you eventually switch back to thread A, you return that pointer that was provided when you switched from A
15:07:32 <kimundi> heat: Honestly I'm starting to think it's just an oversight/bug in the assignment itself
15:07:53 <doug16k> the code after the scheduler call restores the context using the return value, which is the context pointer for the thread that you want to continue
15:08:17 <heat> kimundi: How do those look?
15:10:29 --- quit: Asu`afk (Remote host closed the connection)
15:11:15 <doug16k> most likely, they are expecting the ISR to push everything, call some C, in the C, save the current thread's stack pointer, pick another thread, return that thread's stack pointer, return to the asm, switch stack to the new threads stack, pop everything, return
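A minimal sketch of the flow doug16k is describing (every name below is invented for illustration, none of it is from the course code): the asm stub pushes all general-purpose registers onto the interrupted thread's kernel stack, hands the resulting stack pointer to a C function, switches RSP to whatever that function returns, then pops the registers and iretqs.

    /* hypothetical types/helpers, assumed to exist elsewhere */
    struct thread { void *saved_sp; /* ... */ };
    static struct thread *current_thread;
    extern struct thread *pick_next_thread(void);   /* scheduling policy */

    /* called by the timer IRQ stub with a pointer to the register block it
     * just pushed; returns the register block of the thread to resume */
    void *schedule_from_irq(void *outgoing_sp)
    {
        struct thread *prev = current_thread;
        prev->saved_sp = outgoing_sp;        /* remember where prev's context lives */

        struct thread *next = pick_next_thread();
        current_thread = next;

        return next->saved_sp;               /* stub: mov rsp, rax; pop regs; iretq */
    }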
15:11:16 <kimundi> So, basically interrupts in the course os are split into two parts: a simple part that always handles the interrupt and exits fast, and that can optionally enqueue a heavy part for later execution. The idea behind that is that non-interrupt code can set a lock around critical data structures, and the heavy parts of any interrupts that happen while the lock is held only run when the lock gets released
15:11:38 <kimundi> doug16k: something like that, yeah
15:12:07 --- join: freakazoid0223_ (~IceChat9@pool-108-52-4-148.phlapa.fios.verizon.net) joined #osdev
15:12:38 <kimundi> Anyway, this design means that the heavy part can run as part of an interrupt handler directly, if there was no outer lock, or delayed in non-interrupt code when there was a lock
15:12:54 <kimundi> and the assignment tells us the context switching is supposed to happen during the heavy part
15:13:09 <kimundi> so the context switch can either get triggered inside an interrupt handler, or not
15:13:10 <doug16k> that has nothing to do with multithreading. you are describing drivers that use deferred procedure calls
15:13:16 --- quit: freakazoid0223 (Ping timeout: 255 seconds)
15:13:17 <heat> the heavy part is usually done with threads
15:13:33 <heat> doug16k: Sounds like a more direct DPC without threads
15:13:44 <kimundi> doug16k: Oh, I never meant multithreading - all single threaded
15:14:08 --- join: uvgroovy_ (~uvgroovy@199.188.233.130) joined #osdev
15:14:08 <heat> no, you really meant multithreading
15:14:08 <kimundi> yeah, the professor said there are many names for this concept, and deferred procedure calls is one of them
15:14:13 <doug16k> you are describing handling device code by essentially injecting a signal into the interrupted thread
15:14:43 <heat> wait, no, multitasking
15:14:44 <doug16k> device interrupt handling code*
15:14:49 --- quit: adu (Quit: adu)
15:14:51 <kimundi> heat: ah, yeah, multithreading, but not a parallel one :P
15:15:19 --- quit: dennis95 (Quit: Leaving)
15:15:32 <doug16k> it's not multithreading at all. you seem to be describing hijacking the interrupted thread to process a device IRQ, and returning into the interrupted code from there
15:15:49 --- quit: John___ (Read error: Connection reset by peer)
15:16:13 --- quit: sortie (Quit: Leaving)
15:16:19 --- quit: uvgroovy (Ping timeout: 264 seconds)
15:16:20 --- nick: uvgroovy_ -> uvgroovy
15:16:40 <kimundi> doug16k: I'm not sure I would call it any more hijacking than what any interrupt does inherently
15:17:20 <doug16k> it solves nothing. if the code that was interrupted has acquired a lock the device's driver needs, it will still deadlock
15:18:05 <doug16k> normally deferring a device's IRQ handling solves that problem - but only if it can allow the interrupted code to continue and eventually release the lock
15:18:16 <kimundi> Anyway, it boils down to "interrupt happens, and then depending on whether the interrupted code is in a critical section, the handling code is delayed until a safe moment"
15:18:52 <heat> Is a critical section an actual lock or just a cli?
15:18:52 <doug16k> for example: let's say a disk driver needs to acquire a lock to issue a disk read. let's also say that someone called read, and the read code issuing a new read has acquired the lock. now the disk IRQ happens, and the interrupt handler needs to acquire that lock: deadlock
15:19:26 <kimundi> heat: Actual lock in the "boolean variable" sense
15:20:27 --- quit: xerpi (Quit: Leaving)
15:20:59 <kimundi> (I could also just link to the university site where all the code and assignments live, but its in german and very incrementally structured :P)
15:22:08 <heat> link pls
15:22:17 <heat> I trust in google translator
15:22:28 <kimundi> doug16k: Still trying to process what you wrote - but just as explanation - the idea of the deferring in this case is just to prevent race conditions between code in an interrupt and user code that accesses the same data structure
15:22:55 <heat> HUH
15:23:06 <kimundi> heat: Heh, okay :) https://ess.cs.tu-dortmund.de/Teaching/WS2017/BSB/Aufgaben/index.html
15:23:06 <bslsk05> ​ess.cs.tu-dortmund.de: Übungen zu BSB
15:23:10 <heat> no, you're trying to prevent deadlocks
15:23:18 <kimundi> bslsk05: lol, yeah
15:23:19 <heat> inb4 the code has no locks
15:23:24 <heat> kimundi, it's a bot
15:23:26 <kimundi> ah
15:23:31 * kimundi feels silly
15:23:56 <heat> kimundi, which task?
15:24:07 <kimundi> 5
15:25:01 <kimundi> But seeing how it's always an adventure to understand what needs to be done for those assignments, I'm not too hopeful about anyone else understanding it ad hoc :)
15:25:47 <kimundi> Actually, I guess it would almost make more sense to link our implementation, hmm
15:25:59 <heat> link it
15:26:12 --- join: uvgroovy_ (~uvgroovy@199.188.233.130) joined #osdev
15:26:59 <kimundi> Its a private github repo, I'm actually not sure if I can link that without making it public
15:27:21 <heat> I think you can
15:27:25 --- quit: frolv (Ping timeout: 252 seconds)
15:27:50 --- quit: uvgroovy (Ping timeout: 240 seconds)
15:27:50 --- nick: uvgroovy_ -> uvgroovy
15:29:46 <kimundi> Quick google says no. The issue is also that even if I link it I still need to explain how the 15+ source files that interact with this correlate to each other and it's 00:30 am here
15:30:07 <heat> I can figure it out (I think)
15:30:18 <heat> I really hope the code isn't in german
15:30:21 <heat> if so I'm rip
15:31:02 <kimundi> heat: The API is in English and we try to keep all our comments in English - but the comments that are part of the assignment are in German :)
15:31:11 --- quit: chjk6x_ (Remote host closed the connection)
15:31:23 <heat> that's ok, just link it
15:31:42 --- join: chjk6x_ (~chjk6x@adsl196-127-86-206-196.adsl196-3.iam.net.ma) joined #osdev
15:31:50 --- quit: sprocklem (Ping timeout: 240 seconds)
15:31:51 <kimundi> heat: One moment
15:32:26 --- quit: mechanist2 (Quit: mechanist has disapparated...)
15:34:02 --- join: frolv (~frolv@CPE00fc8d4905d3-CM00fc8d4905d0.cpe.net.cable.rogers.com) joined #osdev
15:36:45 --- quit: chjk6x_ (Ping timeout: 268 seconds)
15:41:48 <MrOlsen> How's everyone today?
15:41:58 <heat> I'm okay, you?
15:42:23 <kimundi> heat: Alright! https://github.com/Kimundi/oostubs-temp
15:42:24 <bslsk05> ​Kimundi/oostubs-temp - None (0 forks/0 watchers)
15:43:21 <kimundi> heat: now I need a writeup of how to navigate this mess of source files ^^
15:43:27 <MrOlsen> Doing alright, trying not to let winter get me down
15:45:02 <MrOlsen> I dusted off my OS a few weeks ago and forgot how much work I have left to do to get a solid libc working
15:46:18 --- join: gdhoward (~gdh@2605:a601:639:2c00:e0d9:bf28:2735:8659) joined #osdev
15:50:00 --- nick: gdhoward -> gdh
15:50:10 <heat> kimundi: Alright, waiting for ya
15:50:27 --- join: svk (~svk@p2003006A650D8E002D861F0EC1F70E60.dip0.t-ipconnect.de) joined #osdev
15:50:38 --- quit: uvgroovy (Quit: uvgroovy)
15:51:03 <kimundi> heat: For starters, here: https://gist.github.com/Kimundi/080e80d791694a3ec68d2bc37bed3a38
15:51:05 <bslsk05> ​gist.github.com: gist:080e80d791694a3ec68d2bc37bed3a38 · GitHub
15:51:10 <kimundi> Will update that gist further now
15:51:26 --- join: adu (~ajr@pool-173-66-242-131.washdc.fios.verizon.net) joined #osdev
15:52:31 <heat> bslsk05: Is the dispatcher the dpc thingy?
15:53:02 <kimundi> heat: no
15:53:10 <kimundi> Its, hang on -
15:54:00 --- quit: bitch (Ping timeout: 248 seconds)
15:54:34 <kimundi> /guard/guardian.cc has the interrupt handler, and the dpc thing is gat.prologue(), followed optionally by the relay() call below it that ends up doing the deferred call (called the epilogue)
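Roughly, the prologue/epilogue split kimundi describes works like the sketch below (all names invented, this is not the OOStuBS code, and the interrupt-disabling around the bookkeeping is omitted): the prologue always runs inside the interrupt, and the heavy epilogue either runs right away or is queued until the non-interrupt code holding the guard lock releases it.

    struct gate {
        void (*prologue)(void);        /* short, always runs in the IRQ       */
        void (*epilogue)(void);        /* heavy, must not race the guard      */
    };

    static volatile int guard_locked;          /* the "boolean variable" lock */
    static struct gate *volatile pending;      /* one deferred epilogue slot  */

    void handle_irq(struct gate *g)
    {
        g->prologue();
        if (guard_locked) {
            pending = g;                       /* defer: run at guard release */
        } else {
            guard_locked = 1;
            g->epilogue();                     /* safe: nobody inside guard   */
            guard_locked = 0;
        }
    }

    void guard_leave(void)                     /* non-interrupt code leaving  */
    {                                          /* its critical section        */
        while (pending) {
            struct gate *g = pending;
            pending = 0;
            g->epilogue();                     /* drain work deferred by IRQs */
        }
        guard_locked = 0;
    }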
15:55:15 --- quit: xenos1984 (Quit: Leaving.)
15:57:07 --- quit: awang (Ping timeout: 264 seconds)
15:57:44 <heat> kimundi: So, what exactly is your problem?
15:58:24 <heat> to me it looks like you'll just need to switch threads during the deferred call
15:59:20 <heat> so you need to create a gate that executes your task switch
15:59:32 <heat> and then just schedule it
16:00:33 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
16:00:37 <kimundi> heat: Updated the gist
16:00:50 --- join: bitch (hctib@gateway/shell/elitebnc/x-masvsrcowxkhctoj) joined #osdev
16:01:24 <kimundi> heat: The issues is that the Gate can end up running either during an interrupt, or after one
16:01:43 <heat> and what's the issue with that?
16:01:57 <heat> you ultimately just want to run it
16:01:58 --- join: sprocklem (~sprocklem@unaffiliated/sprocklem) joined #osdev
16:02:10 <heat> 1) this is a flawed and slow system that doesn't work well
16:02:30 <kimundi> If I'm inside a interrupt and switch task, then the interrupt never returns inside the new task if that task did not also get interrupted inside a interrupt
16:02:32 <heat> 2) You can easily starve the system if you lock that thing too much
16:02:54 <heat> kimundi, huh?
16:03:13 <MrOlsen> kimundi: Why would the interrupt never return?
16:03:31 <MrOlsen> are you talking about an IRQ interrupt?
16:04:10 <kimundi> If I'm inside the interrupt handler, and do a context switch - which in this code base is not the same as the register saving the interrupt does itself - then I switch to a different stack and execution than the one in which I'm currently inside an interrupt handler
16:05:04 <MrOlsen> kimundi: your ISRs should be in kernel space not user space so it wouldn't matter how many times you change stack at application level
16:05:18 <kimundi> Basically, this code has register saving/restoring for when an interrupt triggers, and register saving/restoring for context switches, and they are two independent mechanisms
16:05:31 <heat> well
16:05:50 <MrOlsen> and you can handle your interrupts many ways, for example if your interrupt is your system call interrupt you can let the scheduler switch contexts
16:05:54 <kimundi> Isn't the ISR still running on the same stack?
16:06:01 <MrOlsen> it will eventually switch back into that interrupt and it will return
16:06:03 <heat> if you're switching to a task you're also setting the IP right?
16:06:18 <kimundi> heat: IP?
16:06:23 <heat> instruction pointer
16:06:34 <heat> IP, EIP, RIP, lots of names
16:06:42 <MrOlsen> kimundi: your ISR should not be using the same stack
16:06:57 <MrOlsen> if your int is being called from ring 3
16:07:05 <heat> MrOlsen: it's not
16:07:34 <kimundi> MrOlsen: Is that a "should" in the sense of "the hardware does work differently", or a "should" in the sense of "this is best implemented as..."? :)
16:07:35 <MrOlsen> your trap frame will have esp0 your stack pointer for your kernel space and you'll have another pointer to your userspace stack
16:07:53 <heat> kimundi: both
16:08:15 <MrOlsen> kimundi: if this is x86/64 the cpu will do this as long as your software follows through with that
16:08:58 <kimundi> Maybe a bit of basic knowledge will help me here: What exactly does the cpu do automatically in regard to registers, stack, etc when an interrupt triggers, and what does the "iret" instruction do compared to "ret"?
16:09:18 <geist> it just inverts it
16:09:29 <heat> an interrupt pushes an interrupt stack frame to the kernel stack
16:09:30 <geist> (basically)
16:09:36 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
16:09:39 <heat> iret pops the interrupt stack frame
16:09:50 <heat> ret just pops a return address
16:10:04 <MrOlsen> well
16:10:06 --- join: hppavilion[1] (~dosgmowdo@58-0-174-206.gci.net) joined #osdev
16:10:25 <heat> the thing is that iret also restores a stack, segment registers, IP, and might even switch rings
16:10:40 <geist> there are details, but functionally speaking iret does the opposite of what the hardware did when it took the interrupt in the first place
16:10:47 <MrOlsen> kimundi: you are still responsible to save all registers
16:10:48 <geist> it unstacks things from the stack and puts the state of the cpu back
16:10:51 <MrOlsen> pusha popa
16:10:57 <MrOlsen> in your ISR
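For reference, on x86_64 the CPU itself only pushes a small frame when it takes an interrupt, and iretq pops that same frame back; everything else is the handler's job, which is MrOlsen's pusha/popa point. A struct view of that frame (the layout is architectural, the field names are mine):

    #include <stdint.h>

    /* pushed automatically by the CPU on an interrupt in 64-bit mode
     * (some exceptions additionally push an error code); iretq consumes it */
    struct interrupt_frame {
        uint64_t rip;       /* where the interrupted code resumes        */
        uint64_t cs;        /* code segment, encodes the old ring        */
        uint64_t rflags;
        uint64_t rsp;       /* interrupted stack pointer                 */
        uint64_t ss;        /* interrupted stack segment                 */
    };
    /* rax..r15 are NOT saved by hardware; the ISR stub has to push and
     * pop them around any C code it calls */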
16:11:54 --- quit: adu (Quit: adu)
16:12:06 --- quit: mmu_man (Ping timeout: 276 seconds)
16:12:46 <MrOlsen> also how are you defining your interrupts
16:12:59 <MrOlsen> if you do a trap gate there are extra things pushed into the kernel stack
16:14:17 --- join: awang_ (~awang@cpe-98-31-27-190.columbus.res.rr.com) joined #osdev
16:14:21 <MrOlsen> over your task gate
16:16:38 --- quit: quc (Ping timeout: 246 seconds)
16:16:46 <kimundi> Okay, I'm still horribly lost in all this :)
16:17:21 <MrOlsen> http://wiki.osdev.org/Interrupt_Descriptor_Table
16:17:22 <bslsk05> ​wiki.osdev.org: Interrupt Descriptor Table - OSDev Wiki
16:17:43 --- join: uvgroovy (~uvgroovy@2601:184:4980:e75:6d8c:671b:a48a:2633) joined #osdev
16:18:19 <kimundi> Seeing how it's past 1 am now, I'll head to bed, and tomorrow I can ask the prof directly how this is supposed to be done with his framework. But I think I'll be back afterwards or for later assignments :)
16:18:38 <MrOlsen> sounds good
16:19:47 <kimundi> MrOlsen: The idt code starts here: https://github.com/Kimundi/oostubs-temp/blob/marvin-aufgabe-5/startup.asm#L234 It's a macro that generates 256 handlers that just call the same code with an integer parameter
16:19:49 <bslsk05> ​github.com: oostubs-temp/startup.asm at marvin-aufgabe-5 · Kimundi/oostubs-temp · GitHub
16:19:54 --- quit: uvgroovy (Client Quit)
16:20:07 --- join: uvgroovy (~uvgroovy@2601:184:4980:e75:8418:6808:4dc:f86c) joined #osdev
16:21:04 <MrOlsen> kimundi: I'll check it out
16:21:26 <heat> How did you guys implement a MAP_SHARED file mapping? I'm stuck at marking the page dirty, do I need to check the page tables of every process that has the page mapped?
16:21:30 <kimundi> the actual setting of the idt entry happens below that I think
16:21:39 --- join: king_idiot (~ahaslett@124.157.115.222) joined #osdev
16:21:49 <heat> checking everything sounds sub-optimal
16:22:10 <MrOlsen> heat: with or without anon?
16:22:11 <kimundi> Thanks everyone for your help so far!
16:22:22 <heat> MrOlsen: file backed
16:22:27 <heat> Anon is easy
16:22:42 --- quit: alphawarr1or (Quit: Connection closed for inactivity)
16:22:54 <heat> I want to mark the page as dirty in my internal book keeping as to write back the contents
16:23:10 <kimundi> Just to review - you would say that the proper way to do context switches is to work directly with the registers you saved on an interrupt, and let the interrupt restore a different set before it returns?
16:23:51 <heat> I know I could use the D bit, but checking for it in every mapping is not optimal as far as I can see
16:24:25 <geist> heat: what do you mean?
16:24:40 <geist> you mean you want to harvest the modified bit of all the mappings?
16:25:01 <MrOlsen> you can do that.. you dont have too much space unfortunately in the page table... in my task structure I keep track of shared pages and only free when the last process finishes
16:25:33 <heat> I want to harvest the D bit of all the MAP_SHARED mappings as to be able to schedule the writeback thread to run and write the modified pages
16:26:01 <heat> back to disk
16:26:26 <geist> yeah that's basically hard. two ways to do it: map read only by default and trap the write page fault and use that as a signal
16:26:34 <geist> you gotta do that on architectures that dont have a D bit (like ARM)
16:26:34 <MrOlsen> msync on exit
16:26:45 <MrOlsen> or wait for the task to call an msync
16:26:56 <geist> or, you have a background thread that regularly goes around and harvests D bits from the page tables by scanning them
16:27:01 <heat> right now, whenever write_vfs writes to a page it sets the dirty bit in the struct page_cache so that page gets committed
16:27:18 <geist> either virtually by walking page tables, or by page, and you have to store a reverse list of mapping -> { aspace, vaddr }
16:27:20 <MrOlsen> brb my son is hungry
16:27:32 <latentprion> Then go feed him
16:27:48 <heat> geist: That sounds like my best bet, trapping writes would kill performance
16:27:53 <geist> different oses do it differently. classic unix does it the latter, by scanning by physical page, reversing back to all the places it's mapped at currently
16:28:26 <geist> heat: thing is trapping writes is the only way to do it on most architectures. most dont have a D bit. it has some advantages, that you know precisely when it happens, and it's an edge thats triggered exactly once (until you mark it again as read only)
16:28:35 <geist> vs having to continually spend effort scanning page tables. it's sort of a toss up
16:33:43 <kimundi> Just in case someone is still interested right now, I've sketched up where I see the issues with the context switch control flow as given by the assignment: https://gist.github.com/Kimundi/8c91fe5d2c197c971a181280c2c8dcb2
16:33:45 <bslsk05> ​gist.github.com: issue.md · GitHub
16:34:49 <heat> I think I have a strategy: Maintain a list of MAP_SHARED file-backed virtual regions created at mmap time, update thread fires at a reasonable interval, goes through the list and marks things as dirty as needed
16:35:28 <geist> key is you can also do it based on pages that you've decided are shared or not
16:35:33 <geist> ie, pages with map count > 1
16:35:46 <geist> that way you only do it for exactly the page that is potentially double mapped, and no more at all
16:36:14 <geist> oh i see what you mean by map_shared, not that it's double mapped, it's that you're actually tracking mods for it
16:36:27 <geist> but same thing applies, you could set a bit on the page structure that marks it shared, then put it in another queue
16:36:40 <geist> then just walk pages that are known to be mapped at any given time as shared, and nothing more
16:36:46 <geist> should make the walk less expensive
16:37:25 <geist> ie, track it from the bottom up (the physical page) instead of top down (the virtual region mapped shared)
16:37:32 <heat> All that I want to know is if the page was written to(is dirty)
16:37:37 <geist> there are advantages to either, but in general the physical up is more straightforward
16:37:59 <geist> and has the advantage of doing potentially less work, because it only has to consider pages that are known to be mapped somewhere
16:38:10 <geist> vs all virtual regions, even if they're unpopulated
16:38:41 <geist> downside of bottom up is you need tracking structures to reverse a page back to its mapping object, but you almost always do anyway
16:38:49 <heat> when you say bottom up, do you mean struct page up or struct page_cache(segment of the page cache) up?
16:38:56 <geist> struct page
16:39:33 <geist> it's classically what unixes scan at. they basically walk through pages at a fairly constant rate, reversing to where they're mapped, doing page table maintenance
16:39:45 <geist> vs a top level scanning mechanism (I think NT is known to use this)
16:39:57 <geist> both have advantages/disadvantages, but bottom up is usually conceptually simpler
16:41:46 <heat> So, I need to maintain a list of shared mappings in struct page(with void *vaddr; struct process *process). Then I create a list of all the shared pages, daemon fires up in a couple of seconds, goes through the list and checks the PTE bit?
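That plan, as a hedged sketch (every type and helper below is invented for illustration): the writeback daemon walks the pages known to back MAP_SHARED file mappings, reverses each one to its PTEs, and moves the hardware D bit into the page cache's own dirty flag so it knows what to flush.

    #include <stdbool.h>
    #include <stdint.h>

    struct aspace;
    struct mapping { struct aspace *as; uintptr_t vaddr; struct mapping *next; };
    struct page    { struct mapping *mappings; bool cache_dirty; struct page *next; };

    extern uint64_t *lookup_pte(struct aspace *as, uintptr_t vaddr);  /* assumed */
    extern void tlb_shootdown(struct aspace *as, uintptr_t vaddr);    /* assumed */
    #define PTE_DIRTY (1ull << 6)   /* x86 D bit */

    /* one pass of the writeback daemon over the shared-page list */
    void dirty_harvest_pass(struct page *shared_pages)
    {
        for (struct page *p = shared_pages; p; p = p->next) {
            for (struct mapping *m = p->mappings; m; m = m->next) {
                uint64_t *pte = lookup_pte(m->as, m->vaddr);
                if (pte && (*pte & PTE_DIRTY)) {
                    *pte &= ~PTE_DIRTY;              /* rearm for next pass  */
                    tlb_shootdown(m->as, m->vaddr);
                    p->cache_dirty = true;           /* schedule writeback   */
                }
            }
        }
    }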
16:42:17 <geist> or a simpler way is to just be able to reverse page -> { object it belongs to, offset }
16:42:30 <geist> can do that with a bigass hash table, or a pointer+offset in the page structure
16:42:42 <geist> since object it belongs to should be able to reverse to all of the mappings of it
16:42:52 <heat> what object?
16:43:22 <geist> depends on how your VM is designed, but virtually all VMs have some sort of inner object that a page belongs to at most exactly one of
16:43:32 <geist> in many unices it's the vnode that backs it
16:43:48 <heat> all my virtual address regions store a file_description and an offset, is that it?
16:43:51 --- join: robert_|disconne (~hellspawn@24.96.111.33) joined #osdev
16:43:53 <geist> so you have an object that has a size and a list of physical pages attached to it
16:44:25 --- quit: robert_ (Read error: Connection reset by peer)
16:44:27 <geist> sort of. yeah in your case the file_description is a proto object. might want to formalize that into some actual abstract object that isn't always backed by a file, etc
16:44:39 <geist> in solaris, for example, it is precisely a vnode
16:45:00 <geist> ie, all mappings map a vnode of some type. and multiple mappings of the same vnode are simply that: multiple mapping objects pointing at the same vnode
16:45:14 <geist> and the vnode itself contains the list of physical pages that back pieces of it
16:45:26 <geist> that way when you double map something in multiple address spaces, they automatically share pages by design
16:46:08 <geist> in zircon, for example, i call it literally a Vm Object. it's an abstract object that holds a list of 0 or more pages, and VmMapping maps it somewhere
16:46:30 <heat> Uhm, I wanted to keep a list of pages at the vm region itself
16:46:39 <geist> what happens if you map it twice?
16:46:49 <heat> map what?
16:46:54 <geist> a file/object/whatever
16:47:31 <heat> the kernel goes to the page cache and asks for a page for offset x(which might or might not already exist)
16:47:48 <geist> so the page cache is really the underlying vm object
16:48:01 <heat> Yes, sorta
16:48:03 <geist> and the vm region is a mapping of it
16:48:10 <heat> yeah
16:48:21 <geist> so you dont keep the pages at the vm region level, you keep them in the page cache
16:48:37 <geist> and the vm region just queries it, gets a page, maps it
16:48:49 <geist> and multiple regions would get the same page
16:48:57 <heat> problem is that vm region only asks the page cache for pages when it's mapping a file
16:49:12 <geist> otherwise it does what? gets a new page and zeros it?
16:49:18 <heat> yeah, basically
16:49:24 <geist> this is where an underlying abstract object would simplify the design
16:49:34 --- join: wcstok (~Me@c-71-197-192-147.hsd1.wa.comcast.net) joined #osdev
16:49:40 <geist> basically you have a OO 'vm object' that sits in between those two, and actually owns pages
16:49:49 <geist> and it's where that object gets new pages that is the abstract part
16:50:05 <geist> then you're setting yourself up for copy-on-write, because you can also build object chains
16:50:19 <geist> and create an abstract object that sources from the underlying one, and thus copy-on-write is born
16:51:12 <geist> that way the region code has no special case, it simply maps whatever the vm object gives it, and the vm object transparently figures out where to get data from. then the page cache itself actually turns into an LRU list of vm objects
16:51:23 <geist> which is exactly how modern file caches work
16:51:34 <geist> a vm object (vnode, whatever you want to call it) per file
16:51:46 <geist> kept in a LRU list
16:52:54 <geist> anyway, just a suggestion, you're getting close to a fairly standard, functional, modern design
16:53:25 <heat> So, supposedly, vm_regions have an underlying vm_object, that actually "holds" the underlying pages?
16:53:47 <heat> and shared mappings just hold the same vm_object?
16:54:04 --- quit: robert_|disconne (Read error: Connection reset by peer)
16:55:02 <heat> If so, how am I supposed to support partial unmapping of the shared region if I'm going to affect another mapping somewhere else
16:56:47 <heat> geist ^^
16:56:50 --- quit: kimundi (Ping timeout: 240 seconds)
17:02:34 <geist> you just do. the mappings are independent
17:02:55 <geist> you just remove the mapping, but as long as there is still at least one ref to the vm object it sticks around
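The layering geist is suggesting, as a hedged sketch (names invented, not anyone's actual code): pages live in one vm object per file (or per anonymous allocation), regions are just windows onto it, and a partial munmap only touches the region and the page tables while the object and its pages live on as long as anything still references them.

    #include <stddef.h>
    #include <stdint.h>

    struct inode;
    struct page;

    /* owns the physical pages for one file / anonymous allocation */
    struct vm_object {
        struct inode  *backing;     /* NULL for anonymous memory           */
        struct page  **pages;       /* sparse array indexed by page offset */
        size_t         npages;
        int            refcount;    /* one per mapping / open handle       */
    };

    /* one per mmap(): a view of some range of a vm_object */
    struct vm_region {
        struct vm_object *object;
        size_t            object_off;   /* offset into the object          */
        uintptr_t         vaddr;        /* where this view is mapped       */
        size_t            length;
        int               prot;
    };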
17:05:57 --- join: gerryg (~gerryg@124.246.16.1.static.nexnet.net.au) joined #osdev
17:05:57 <heat> geist: In zircon, why do VMOs have children and parents?
17:06:09 --- quit: listenmore (Remote host closed the connection)
17:06:16 <geist> because of copy on write
17:06:27 <geist> that's how you implement a chain of them. yuo can clone a vmo into another one, with the copy-on-write bit set
17:06:37 --- join: listenmore (~strike@2.27.123.231) joined #osdev
17:06:40 <geist> then any write to the clone creates a cow copy of that page
17:06:50 <geist> and reads go through to the parent vmo
17:08:07 <heat> if I effectively touch every page of that vmo and COW everything in, does the vmo stop having a parent?
17:08:25 <geist> it doesn't actually stop having a parent, but yes it functionally does
17:08:48 <geist> in classic unix there was a garbage collection thing there that eventually kicks in, throwing away objects that are completely covered by all their children
17:08:49 --- join: ljc (~ljc@unaffiliated/ljc) joined #osdev
17:09:00 <geist> zircon does not implement that yet
17:12:37 <heat> Oh, and if for example both of the mappings sharing a VMO unmap a specific page(which gets freed as the refcount reaches 0), the VMO removes it from its list?
17:12:43 <heat> I think I got it
17:13:58 <geist> well, the vmo itself keeps a list of all the children vmos (and they hold a ref to the parent) and the vmos also keep a list of all the VmMappings across all address spaces (and they also hold a ref back to the vmo)
17:14:19 <geist> and then user space can get a handle directly to the vmo and manipulate it directly (that's a fairly zircon specific feature)
17:14:52 <geist> basically the zircon VM takes inner designs that most VMs already have and exposes it out to user space as a kernel level object
17:15:15 <geist> generally this whole shadow copy object and whatnot has been around for years, it's just an internal implementation detail
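And the parent/child (copy-on-write clone) lookup in the same hedged style, assuming the vm_object above additionally carries a parent pointer and that alloc_page / alloc_zeroed_page / page_address exist: reads fall through to the parent until the child owns a copy of the page, and the first write copies it in.

    #include <string.h>

    /* assumes: struct vm_object gains "struct vm_object *parent;"
     * (NULL when the object is not a clone) */
    #define PAGE_SIZE 4096
    extern struct page *alloc_page(void);            /* assumed helpers     */
    extern struct page *alloc_zeroed_page(void);
    extern void *page_address(struct page *p);

    struct page *vmo_get_page(struct vm_object *vmo, size_t idx, int for_write)
    {
        if (vmo->pages[idx])
            return vmo->pages[idx];                  /* already own this page */

        if (!vmo->parent) {                          /* root object: demand-zero
                                                        (or page-cache fill)   */
            vmo->pages[idx] = alloc_zeroed_page();
            return vmo->pages[idx];
        }

        struct page *src = vmo_get_page(vmo->parent, idx, 0);
        if (!for_write)
            return src;                              /* keep sharing parent's */

        struct page *copy = alloc_page();            /* first write: COW copy */
        memcpy(page_address(copy), page_address(src), PAGE_SIZE);
        vmo->pages[idx] = copy;
        return copy;
    }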
17:17:46 --- join: adu (~ajr@pool-173-66-242-131.washdc.fios.verizon.net) joined #osdev
17:21:30 --- quit: gerryg (Quit: Colloquy for iPhone - http://colloquy.mobi)
17:21:39 --- quit: daniele_athome (Ping timeout: 276 seconds)
17:21:57 --- join: JusticeEX (~justiceex@pool-108-30-196-198.nycmny.fios.verizon.net) joined #osdev
17:23:11 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
17:25:19 --- quit: adu (Quit: adu)
17:29:02 --- quit: uvgroovy (Remote host closed the connection)
17:29:59 --- join: zwliew (uid161395@gateway/web/irccloud.com/x-dajyaeqnkbnxuipd) joined #osdev
17:30:48 --- quit: gamozo (Read error: Connection reset by peer)
17:32:15 --- nick: swgillespie -> swgillespie[GT]
17:34:49 --- quit: listenmore (Read error: Connection reset by peer)
17:35:16 --- join: listenmore (~strike@2.27.123.231) joined #osdev
17:41:17 --- quit: aosfields_ (Ping timeout: 265 seconds)
17:41:21 --- quit: gdh (Quit: Leaving)
17:41:23 <graphitemaster> geist, ping
17:41:34 <geist> PONG
17:41:42 <graphitemaster> geist, I found that sad I mentioned yesterday
17:41:44 <graphitemaster> https://twitter.com/aionescu/status/948817335980142592
17:41:44 <bslsk05> ​twitter: <aionescu> The mitigations poor OS vendors have to do for "Variant 2: branch target injection (CVE-2017-5715)" aka #Spectre make me cry/die a little inside. https://pbs.twimg.com/media/DSrgLFZVwAAoov6.jpg
17:41:47 <graphitemaster> prepare to be sad
17:42:21 <graphitemaster> call sleds
17:42:27 <geist> wat.
17:42:37 <_mjg> ye
17:42:51 <_mjg> all these years fighting branches, i-cache footprint and whatnot
17:43:00 <_mjg> pretty much wasted
17:43:03 <geist> kind of feel sorry for the cpus now
17:43:06 <_mjg> (well, not really, but..)
17:43:18 <Mutabah> And things like SYSCALL/SYSENTER making syscalls less expensive
17:43:19 <geist> like now they have so much more work to do
17:43:25 <graphitemaster> geist, what that looks like here http://82.181.198.151/pub/cpu1.png
17:43:49 <graphitemaster> that's just a patched hypervisor too
17:44:00 <graphitemaster> 30-100% relative cpu usage increases depending on the workload
17:44:05 <Mutabah> https://randomascii.wordpress.com/2018/01/07/finding-a-cpu-design-bug-in-the-xbox-360/ - Semi-related to recent vulns
17:44:05 <bslsk05> ​randomascii.wordpress.com: Finding a CPU Design Bug in the Xbox 360 | Random ASCII
17:44:20 <geist> yah i was thinking that KVM style hypervisors where the bulk of the emulation logic is in a user space process will be especially hurt
17:44:29 <geist> iirc i think hyper-v uses some sort of similar user hosted process
17:44:55 <geist> yah was reading the 360 one a while ago. what a bad idea of an instruction
17:45:36 <geist> can probably sploit the shit out of the host os with that. thankfully no one cares about 360 anymore
17:46:23 <Mutabah> Its heart was in the right place... but would have needed some clever trickery with the L2 cache controller to actually be correct
17:46:48 <graphitemaster> I think the moral of the story is to _never_ subvert the CPU cache hierarchy
17:47:10 <Mutabah> (a flag indicating that the memory was in L1 cache only or something, causing a cache flush when something else needs it)
17:47:21 <geist> yah you can get a similar sort of thing if you violate ARM's rules and map pages with dissimilar cache bits
17:47:40 <geist> you can basically instruct the cpu to violate its own cache coherency, but then yolo. your problem now
17:47:49 <graphitemaster> the call sleds are highly tuned to throw speculative execution off
17:47:58 <graphitemaster> the hard part is the deeper the pipeline the deeper the needed sled
17:48:13 <graphitemaster> it's almost like a weird science at this point
17:48:14 <geist> seems like any of those call sleds would just get defeated with a cpu with a longer speculation
17:48:27 <geist> but then i guess they just figured out what the deepest one is and went for that
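For reference, the best-known shape of this kind of indirect-branch mitigation is the retpoline thunk (which may or may not be exactly what the screenshot shows): speculation of the ret is trapped in a pause/lfence loop while the real target is written over the return address. A sketch as GNU C top-level asm, AT&T syntax, x86_64:

    /* hypothetical thunk; callers emit "call indirect_thunk_rax" instead
     * of "call *%rax" */
    __asm__(
        ".globl indirect_thunk_rax\n"
        "indirect_thunk_rax:\n"
        "    call 1f\n"              /* pushes the address of 2: below         */
        "2:  pause\n"                /* speculative returns land here ...      */
        "    lfence\n"
        "    jmp 2b\n"               /* ... and spin harmlessly                */
        "1:  mov %rax, (%rsp)\n"     /* overwrite return addr with real target */
        "    ret\n"                  /* architecturally jumps to *%rax         */
    );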
17:48:48 <geist> i vaguely wonder how badly the P4 is susceptible to all of this
17:48:51 <geist> but then i really dont
17:49:24 <geist> i actually inheirited a P4 machine the other day, got it back from my dad. used to be my main box for a short period of time back in 2001 or so
17:49:44 <_mjg> :)
17:50:09 <geist> it's a first gen, 1.8Ghz I believe. hyperthreaded. but since it's 32bit only i dont care to run OS code on it or whatnot
17:51:11 <graphitemaster> pentium 4 had a barrel shifter
17:51:16 <graphitemaster> so it's already a better CPU
17:51:24 <graphitemaster> bit hacks actually ran well on it.
17:51:24 <geist> oh no you didunt
17:51:28 --- quit: daniele_athome (Ping timeout: 260 seconds)
17:51:36 <geist> yeah so did the K5, but folks dont sing its praises now
17:51:46 <graphitemaster> the K5 caught on fire if you looked at it funny
17:52:01 <geist> yah but it ran the distributed.net RC5 stuff like a bat out of hell
17:52:09 <graphitemaster> fair enough
17:52:26 <graphitemaster> I really miss barrel shifters
17:52:32 <geist> i had one for a while, wish i hadn't tossed it
17:52:41 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
17:52:46 <graphitemaster> ironically Intel Atom has one, except it's not exposed, it's used to implement integer divide
17:52:59 <graphitemaster> it's also one of the reasons atom sucks at integer divide :P
17:53:06 <graphitemaster> 8 bit divide is one cycle
17:53:12 <graphitemaster> but anything else is super expensive
17:54:00 <_mjg> so intel claims meltdown is unfixable in microcode. curious if there is a special case where you would be able to go 32-bit abi style, but still access all the regs
17:54:09 <_mjg> and if that mode would in fact fix the problem
17:54:30 <_mjg> not that it would be applicable to the current software stack, but there were x32 ideas in the past
17:57:14 <Mutabah> _mjg: How would that fix meltdown? By truncating the address space so userspace can't address the kernel?
17:57:26 <svk> pentium 4 even supports branch prediction hints, spectre should work a treat on these
17:58:27 <_mjg> Mutabah: that's the basic idea, but it is unclear if it would fix it
17:58:59 <_mjg> too bad x32 was not adopted
17:59:00 <Mutabah> _mjg: Yeah, afaik that won't work. Apparently even segment limits don't work to prevent meltdown.
17:59:09 <Mutabah> Same problem as with paging, the fault doesn't stop the read
18:03:05 <graphitemaster> There is a trivial solution to meltdown.
18:03:32 <graphitemaster> Just flush all of cache
18:06:04 --- quit: kasumi-owari (Ping timeout: 255 seconds)
18:06:17 <heat> or disable it
18:06:33 --- join: kasumi-owari (~kasumi-ow@ftth-213-233-237-007.solcon.nl) joined #osdev
18:11:12 <graphitemaster> https://android.googlesource.com/platform/packages/apps/Launcher3/
18:11:13 <bslsk05> ​android.googlesource.com: platform/packages/apps/Launcher3 - Git at Google
18:11:17 <graphitemaster> nice merge
18:14:40 --- quit: return0e (Ping timeout: 256 seconds)
18:15:30 --- join: return0e (~return0e@87-102-105-145.static.kcom.net.uk) joined #osdev
18:22:51 --- quit: svk (Quit: Leaving)
18:22:52 --- quit: empy (Ping timeout: 252 seconds)
18:23:05 --- join: empy (~l@c-24-56-245-159.customer.broadstripe.net) joined #osdev
18:23:05 --- quit: empy (Changing host)
18:23:05 --- join: empy (~l@unaffiliated/vlrvc2vy) joined #osdev
18:32:47 --- quit: Dreg (Quit: Dreg)
18:33:42 --- join: Dreg (~Dreg@fr33project.org) joined #osdev
18:37:54 --- quit: nekomune (Ping timeout: 272 seconds)
18:39:17 --- quit: Belxjander (Ping timeout: 265 seconds)
18:40:04 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
18:46:16 --- quit: ljc (Quit: ayy)
18:47:13 <doug16k> graphitemaster, flush the cache when? that can't solve meltdown. control doesn't even leave the user program in meltdown attack
18:47:29 --- join: eremitah_ (~int@unaffiliated/eremitah) joined #osdev
18:48:41 <graphitemaster> doug16k, flush it on timer interrupt
18:49:09 --- quit: eremitah (Ping timeout: 268 seconds)
18:49:12 --- nick: eremitah_ -> eremitah
18:49:26 --- quit: ohnx (Ping timeout: 265 seconds)
18:49:51 <Mutabah> Flush on iret/sysret maybe?
18:50:05 <geist> doug16k: sort of. part of meltdown is the supervisor data has to be in the L1D cache (which is virtually indexed)
18:50:23 --- quit: Belxjander (Ping timeout: 268 seconds)
18:50:24 <geist> so seems that if you completely dumped the L1 cache on kernel -> user transition you'd also avoid it
18:50:49 --- join: nekomune (~nekomune@comfy.moe) joined #osdev
18:51:15 <geist> if it weren't in the L1D then the cpu has to go do a TLB lookup to figure out which physical page it is, and then it nerfs the speculative read. that's my understanding of it anyway
18:53:23 <geist> but that being said there is no 'just flush L1' instruction on x86, so can't easily do it
18:53:32 <geist> you could on arm though
18:54:01 <Kazinsal> Yeah unfortunately best you can do is WBINVD
18:54:15 <doug16k> you don't need to do any transitions to read kernel memory
18:54:32 <doug16k> it will speculatively load it into the cache
18:54:50 <geist> hmm, that brings up a question: if the L1 is virtually indexed, then where do cached page table entries go?
18:54:59 <Mutabah> geist: TLB?
18:55:14 <Mutabah> And hits L2 when checking entries?
18:55:16 <heat> what about invd?
18:55:17 <geist> well, what I mean is the cached page table structure itself. i guess maybe those are only cached in L2?
18:55:20 <Mutabah> Page walks are expensive as is
18:55:21 <doug16k> when your attack reads kernel memory it will pull it in
18:55:31 <geist> because the TLB has the cached inner walking cache
18:55:47 <doug16k> Mutabah, page walks are ~ 15 cycles
18:55:51 <Mutabah> doug16k: From what I understand, it requires that the attacked memory be in a very close cache (for the read to succeed before the permissions check)
18:55:58 <geist> yeah, I guess that makes sense. probably doesn't cache the *contents* of the PTEs in the L1, but may in L2. the structure of the page tables is in the TLB though, that's the inner page table cache
18:56:06 <wcstok> invd's evil, also isn't available if you have sgx enabled (not that anyone cares, probably)
18:56:33 <geist> yah there's the new clflush, but that only works on one line at a time (iirc
18:56:38 <doug16k> when you do the attack properly, the instruction that reads the kernel memory doesn't even retire
18:56:51 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
18:56:58 <heat> oh crap, invd doesn't write anything back
18:57:06 <Kazinsal> Yep
18:57:06 <wcstok> That's why it's evil :)
18:57:13 <geist> yeah that's like 'toss this shit i dont care why'
18:57:16 <Kazinsal> INVD is more like CRASHPROGRAMRANDOMLY
18:57:26 <geist> ARM has some equivalent, but like invd it's absolutely supervisor only
18:57:31 <Kazinsal> "Memory coherency? The fuck is that?"
18:57:35 <geist> because there's no safe way to use it unless you know what you're doing
18:58:01 <Kazinsal> INVD throws a GPF if CPL > 0 but still, holy shit no bad
18:58:06 <geist> i suppose you could use it, for example, to just dump the cache of a page just before you throw it back in the free list
18:58:11 <geist> since who cares if you write it back
18:58:24 <doug16k> invd is useless unless you are a handful of instructions into the POST
18:58:29 <geist> might actually look into doing that on arm, but i suspect it's not a win
18:58:39 <Kazinsal> INVD sadly takes no args
18:58:49 <geist> yah, so it's only global and thus largely useless
18:59:23 <geist> if anything i suspect it's only really useful for precisely one reason: cache is disabled, clear all the tags so that it's in a consistent state
18:59:30 <wcstok> wbinvd's evil too, for different reasons =/
18:59:34 <geist> you have to do that on arm, since the default state of the TLB and cache tags is UNDEFINED
18:59:56 <geist> and the cpu starts with cache and mmu turned off
19:00:25 <heat> https://stackoverflow.com/questions/41775371/what-use-is-the-invd-instruction
19:00:27 <bslsk05> ​stackoverflow.com: assembly - What use is the INVD instruction? - Stack Overflow
19:00:32 <geist> haha stackoverflow
19:00:39 <doug16k> Mutabah, the first loop it might not be cached, but the speculated load will pull it in. the attack has no requirement that the kernel accessed anything, the user program causes the loads
19:00:43 <heat> even tells you how to enter cache as ram mode
19:00:53 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
19:01:14 <geist> yep. the AMD BKDG also explains it too
19:01:42 <geist> basically the first few lines of ROM as the cpu powers up and before you touch any memory you get into cache-as-ram mode and then bootstrap the rest of the bios (bringing up the dram controller, for example)
19:01:43 <Mutabah> doug16k: Hmm... depends on if the memory is actually accesed even if the TLB lookup indicates that the permissions check fails
19:01:55 <Mutabah> doug16k: Which I don't know. Good point though
19:02:15 <doug16k> the bug is it doesn't really check until the instruction retires. if it never retires, it never checks
19:02:46 <Kazinsal> Oh god that SO post is giving me terrible ideas
19:02:50 <Kazinsal> terrible, awful, hilarious ideas
19:02:56 <heat> hahahaha
19:03:01 * geist feels proud that a wide variety of low level hackers got their computer architecture powered up this week
19:03:04 <doug16k> the attack would be pretty useless if it could only read lines that happened to still be in L1 brought in by the kernel
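The core of what doug16k is describing, straight from the public Meltdown write-ups (not working code on its own: the fault suppression and the flush+reload timing pass are omitted): the architecturally-faulting load can still execute transiently, and a dependent load leaves a per-value footprint in the cache.

    #include <stdint.h>

    /* 256 probe lines, one per possible byte value, assumed flushed beforehand */
    extern volatile uint8_t probe[256 * 4096];

    void transient_read(const volatile uint8_t *kernel_addr)
    {
        uint8_t secret = *kernel_addr;    /* faults at retirement, but may have
                                             already executed speculatively    */
        (void)probe[secret * 4096];       /* transient dependent load caches
                                             exactly one probe line            */
    }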
19:03:42 <Kazinsal> cache-as-RAM for super fast forwarding plane lookups
19:03:55 <Kazinsal> just turn the L2 cache into a MAC table
19:03:59 --- join: bcos_ (~bcos@1.123.1.51) joined #osdev
19:04:00 <geist> the average ARM SoC usually has 128KB of local SRAM or whatnot for precisely this
19:04:11 <geist> since pinning the cache/tlb is IMPLEMENTATION DEFINED in arm world
19:04:27 <heat> the best mitigation is to add a completely random number to the high precision timer so you can't use timing attacks
19:04:46 <heat> boom, completely safe
19:05:05 <geist> not very high precision then. all you've done there is just reduced the precision
19:05:21 <doug16k> heat, you can make your own precision timer. spin on timer++ in an infinite loop in another thread
19:05:31 <geist> i think the whole 'second core runs a tight loop' has mostly nerfed the whole notion that the being able to read the high precision timer is a problem
19:05:39 <heat> disable threading too :D
19:05:48 --- quit: hppavilion[1] (Read error: Connection reset by peer)
19:05:59 <geist> hell, disable SMP
19:06:00 <Kazinsal> non-SMP OSes now somehow back in vogue
19:06:04 <Kazinsal> damn, beaten
19:06:05 <geist> put the kernel on one cpu
19:06:08 <geist> heh
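doug16k's point about rolling your own clock, sketched with POSIX threads (hypothetical names): a sibling thread bumping a shared counter gives attack code a timer fine-grained enough for cache-timing, which is why just coarsening or fuzzing the architectural timers doesn't buy much.

    #include <pthread.h>
    #include <stdatomic.h>

    static _Atomic unsigned long ticks;

    /* started once with pthread_create(&tid, NULL, counter_thread, NULL),
     * ideally pinned to another core */
    static void *counter_thread(void *arg)
    {
        (void)arg;
        for (;;)
            atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
        return NULL;
    }

    /* roughly "how long did this access take", in counter increments */
    static unsigned long time_access(const volatile char *p)
    {
        unsigned long t0 = atomic_load_explicit(&ticks, memory_order_relaxed);
        (void)*p;
        return atomic_load_explicit(&ticks, memory_order_relaxed) - t0;
    }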
19:06:19 <doug16k> I think asserting RESET# permanently would be a perfect fix for every security hole
19:06:36 <geist> awww now it's just a denial of service
19:06:51 <Kazinsal> denial-as-a-service
19:07:09 <geist> no way, denied!
19:07:24 <Kazinsal> party on, dudes
19:07:28 --- quit: tacco\unfoog ()
19:07:36 <Kazinsal> (wait, no, that was bill and ted)
19:07:39 --- quit: bcos (Ping timeout: 268 seconds)
19:07:50 <Kazinsal> (gettin my 80s mixed up)
19:08:06 <wcstok> Have the OS run all userspace with trap flag set, that should make it near impossible to tell the difference between something in cache and something loaded from a floppy disk
19:08:57 <variable> hello humans!
19:09:08 <variable> wcstok: nah, just disable the cache alltogether
19:09:17 <heat> nah
19:09:32 <wcstok> That's less fun, at least the kernel code would run reasonably well
19:09:47 <variable> memory is faster than it used to be
19:09:49 <variable> who needs cache
19:09:56 <variable> (actually, that's a lie, but shush)
19:10:10 <heat> run userspace with the trap flag, with all code mapped as a file-backed mapping of a floppy
19:10:30 <geist> well, you could just disable cache when user space is running
19:10:33 <doug16k> you absolutely need the instruction cache. the data cache is more optional though if you had a really deep scheduler
19:10:36 <geist> kernel can run cached, in all its glory
19:10:39 <Kazinsal> I ought to run memtest86+ on this 8700K rig
19:10:51 <geist> DO IT. NOW YOU WILL KNOW
19:10:54 <Kazinsal> I would love to see what the L1, L2, and L3 run at
19:11:01 <geist> dont hesitate. do it now
19:11:15 <geist> you'll never know. do it do it do ti doti doit doit doitdtiot
19:11:51 <Kazinsal> I appreciate your enthusiasm
19:12:01 <wcstok> My favorite part of these bugs is it was other people who got to deal with them over Christmas while I had a nice 3 week vacation, yay for nasty bugs
19:12:11 <Kazinsal> oh hey AIDA64 has a cache benchmark now
19:12:12 <Kazinsal> get in
19:12:31 <variable> question is when does my betting pool expire
19:12:39 <geist> wcstok: i was a little miffed i wasn't let in on it
19:12:55 <geist> but then, well, zircon has 0 users now, so i guess it technically wasn't impacting us
19:13:25 <geist> turns out lots of folks i worked with knew about it for months :(
19:13:27 <wcstok> All I got was bare minimal details a couple months ago when one of the ones in-the-know was going on a rant
19:13:28 <Kazinsal> I wouldn't be surprised if there were people attempting to run zircon as daily driver
19:13:44 <geist> Kazinsal: oh man, sorry for those folks
19:14:14 <doug16k> they want $200 to tell you DRAM timings in aida64 D:
19:14:15 <geist> but i do expect some day someone will just bust out some completely different subsystem on top of the kernel, and i will rejoice
19:14:32 <wcstok> Rant started out with "Don't talk about this to anyone else or too loud", then the patches started rolling in with "Don't ask for details about what this is doing, blah blah"
19:14:54 <geist> you can tell the freebsd folks were a bit salty about being left out in the cold on this one
19:14:55 <heat> wcstok, patches for what?
19:14:59 <Kazinsal> GOD DAMN
19:15:00 <variable> doug16k: aren't DeWitt clauses fun?
19:15:07 <Kazinsal> 8700K's L1 runs at 2 TB/s read
19:15:20 <Kazinsal> 900 GB/s write
19:15:22 <variable> o.O
19:15:25 <variable> that's fast
19:15:26 <variable> or slow
19:15:28 <variable> I duno
19:15:30 <variable> its a number
19:15:34 <wcstok> esx
19:15:39 <Kazinsal> L2 is 700 GB/s read, 500 GB/s write
19:15:44 <geist> Kazinsal: that is a number.
19:15:53 <Kazinsal> L3 is 360 and 230
19:16:01 <variable> Kazinsal: what are you using to measure this? and what systm?
19:16:10 <geist> you can probably reverse that to cache lines per second and it'll line up with what you expect
19:16:18 <geist> something like 2 or 3 cache lines of 64 bytes a cycle, perhaps
19:16:23 <Kazinsal> for comparison this DDR4-3200 is running at about 44 GB/s read/write
19:16:39 <variable> we're up to DDR-4 now?
19:16:41 <Kazinsal> variable: AIDA64 Engineer's cache and memory test/benchmark
19:16:45 <variable> o
19:16:49 <heat> variable, uh ye
19:17:00 <variable> oh, I remember buying DDR-2 RAM
19:17:01 <geist> so something like a 4 Ghz machine should be able to do 4GB/sec if it were one byte/cycle
19:17:16 <Kazinsal> CPU is an i7-8700K at 4.8 GHz
19:17:16 <geist> so assume it can load a cache line a cycle, that's 4*64 (256GB/sec)
19:17:31 <graphitemaster> geist, nawh, openbsd was extra salty about it :P
19:17:33 <Kazinsal> So ballparking it yeah that seems about right
19:17:35 <geist> now multiply that by a few because it has this deep ass load/store pipeline with multiple load/store units
19:18:01 <geist> you get up in the right ballpark of hundreds of GB/sec to TB/sec
19:18:05 <variable> graphitemaster: eh?
19:18:05 <geist> graphitemaster: oh i hadn't read that one
19:18:18 <Kazinsal> apparently theo was doing theo things
19:18:21 <geist> i just went with what was on the main page
19:18:22 <Kazinsal> didn't read what exactly
19:18:31 <heat> probably arc4random talks
19:18:44 <graphitemaster> btw, slightly unrelated, but since minix runs on all intel CPUs... you could also leak and gain access to ring -3 minix stuff as well
19:18:50 <heat> or "how to break the standard"
19:19:02 <variable> FYI - https://reviews.freebsd.org/D13797
19:19:04 <bslsk05> ​reviews.freebsd.org: ⚙ D13797 PTI for amd64.
19:19:33 <graphitemaster> geist, https://marc.info/?t=151521438600001&r=1&w=2
19:19:38 <bslsk05> ​marc.info: 'Meltdown, aka "Dear Intel, you suck"' thread - MARC
19:20:11 <Kazinsal> "Handling of CPU bugs disclosure 'incredibly bad': OpenBSD's de Raadt"
19:20:13 <Kazinsal> ah, theo
19:20:16 <Kazinsal> never not be you
19:20:26 <variable> theo and linus
19:20:36 <variable> two people that make life worse for developers
19:20:37 <variable> :\
19:20:40 <heat> .theo
19:20:40 <glenda> That's not going to happen.
19:20:45 <heat> .theo
19:20:45 <glenda> Sigh.
19:21:05 <heat> .theo
19:21:05 <glenda> Does it help anything? No.
19:21:07 <Kazinsal> I mean Linus has to janitor the king shit of Free Software culture
19:21:18 <geist> nice. theo managed to get 'salmonella spinach' into the discussion
19:21:49 <Kazinsal> I would not be surprised if he's trying to out-drink the ghost of lemmy
19:21:53 <geist> https://marc.info/?l=openbsd-tech&m=151521473321941&w=2 actually, totes agree with him here
19:21:57 <bslsk05> ​marc.info: 'Re: Meltdown, aka "Dear Intel, you suck"' - MARC
19:22:00 <Kazinsal> theo on the other hand
19:22:01 <Kazinsal> well
19:22:29 <Kazinsal> dude embraces both south african culture and canadian culture in a beautiful hybrid that isn't afraid to tell the world in detail how and where to shove it
19:22:34 <geist> despite the fact that i work for The Man, I'd like to think i'm helping add some sort of diversity and choice to the world
19:23:04 <graphitemaster> working for The Man is a dangerous game.
19:23:31 <geist> a most dangerous proposition....
19:23:39 <geist> but danger is my middle name!
19:23:49 <graphitemaster> You could even say it's irresponsible.
19:24:04 <graphitemaster> but The Other Mans want you to pay bills.
19:24:10 <geist> oh shit, i think I'm The Man
19:24:10 <doug16k> Kazinsal, that's impossible to believe. 145 bytes per cycle eh?
19:24:13 <geist> crap!
19:24:38 <heat> wait, did the linux folks have non-public info?
19:24:38 <graphitemaster> geist, the only winning move is not to The Man.
19:24:39 <Kazinsal> doug16k: that's less than three cache lines
19:24:45 <graphitemaster> heat, yes.
19:24:49 <geist> doug16k: that's only like 3 cache lines per cycle. i think there are that many load/store units
19:24:51 <doug16k> Kazinsal, it's nonsense
19:25:00 <Kazinsal> considering how deep we have seen speculative execution on modern Intel processors go in the past week and a half
19:25:02 <heat> why the fuck didn't they patch it before then?
19:25:07 <geist> iirc, i think skylake fetches something like 256 bytes or so to feed the instruction unit
19:25:10 <variable> Also, aren't cycles kind of a questionable construct
19:25:16 <variable> heat: Linux broke the embargo
19:25:24 <doug16k> which op loads 3 512-bit registers?
19:25:27 <variable> doug16k: yeah, that's actually less than I'd expect ...
19:25:42 <Kazinsal> speculative fetching my man
19:25:45 <doug16k> it only does one load per cycle
19:25:46 <geist> doug16k: multiple ops loading say 256 bits
19:25:55 <geist> doug16k: multi issue mang
19:26:06 <Kazinsal> unroll yourself a bunch of AVX-512 moves
19:26:17 <graphitemaster> well, Linux did not break the embargo. Some people committed some code to Linux and some public people caught on and made theories
19:26:18 <Kazinsal> watch the speculative pipeline go "OH FUCK YA BUD"
19:26:21 <doug16k> dude, it can't decode that many per cycle
19:26:21 <graphitemaster> and then eventually people found out
19:26:26 <graphitemaster> so they were like "fuck it"
19:26:32 <doug16k> avx-512 instructions are huge
19:26:42 <geist> doug16k: why do you say that? if it can multi issue it can very well load multiple
19:26:51 <geist> anyway, t's still irrelevant, all yo uneed to do is load one byte out of a cache line
19:26:52 <doug16k> how many load pipes are you claiming it has?
19:26:55 <geist> and it fetches the entire cache line
19:26:57 <heat> graphitemaster: You mean the KAISER patches?
19:27:02 <graphitemaster> yep.
19:27:13 <geist> so you can basically issue a big pile of load base; load base + 64; load base + 128; ...
19:27:31 <graphitemaster> holy shit http://www.businessinsider.com/tillerson-mattis-trump-north-korea-strike-2018-1
19:27:32 <bslsk05> ​www.businessinsider.com: Tillerson and Mattis are reportedly trying to hold Trump back from striking North Korea
19:27:38 <graphitemaster> forget CPUs
19:27:40 <geist> doug16k: i think a skylake has at least 2, if not 3. i seem to remember it still being a thing that it has over ryzen
19:27:45 <graphitemaster> your president is trying to nuke NK
19:27:52 <variable> o.O
19:27:52 <geist> graphitemaster: take it somewhere else
19:27:53 <variable> O.o
19:28:38 <Kazinsal> it's not hard to need a whole cache line at once these days
19:28:58 <geist> but yeah, you only need to touch one byte out of it, the load/store unit operates on a cache line at a time
19:28:59 <heat> graphitemaster: Wouldn't it need to go through congress first?
19:29:14 <variable> can we not discuss that stuff here plz
19:29:25 <geist> yeah i'm here precisely to avoid having to think about the real world
19:29:26 <heat> ye ye sry
19:29:29 <doug16k> it has 2, according to agner fog. port 2 and 3
19:29:33 <geist> only way to stay sane
19:29:47 <graphitemaster> I wouldn't, but my phone just screamed possible nuclear war / fallout and was offering me suggestions for bunkers and aid
19:29:52 <geist> doug16k: so assuming it only reads a single cache line at a time, that's 128 bytes/cycle
19:29:59 <graphitemaster> hence why I got scared and linked it here.
19:30:12 <geist> dunno where the other bytes are , but it's pretty close
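For what it's worth, the per-core ceiling and the benchmark number roughly reconcile if AIDA64 runs the cache test across all six cores of the 8700K (an assumption about the benchmark, not something stated here):

    #include <stdio.h>

    int main(void)
    {
        double ghz = 4.8;          /* Kazinsal's clock                    */
        double line = 64;          /* bytes per cache line                */
        double load_ports = 2;     /* per Agner Fog, cited by doug16k     */
        double cores = 6;          /* i7-8700K                            */

        double per_core = ghz * 1e9 * line * load_ports;      /* bytes/s  */
        printf("per core : %.0f GB/s peak L1 read\n", per_core / 1e9);
        printf("all cores: %.2f TB/s ceiling (measured ~2 TB/s)\n",
               cores * per_core / 1e12);
        return 0;
    }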
19:30:14 <heat> wait your phone warns you of a possible nuclear war?
19:30:30 <geist> guys shut the fuck up
19:30:36 <geist> take it somewhere else. now
19:30:44 <Kazinsal> seriously
19:31:19 <variable> geist: tyvm
19:36:45 --- join: chao-tic (~chao@203.97.21.86) joined #osdev
19:37:37 --- quit: JusticeEX (Quit: Lost terminal)
19:39:14 --- quit: zwliew (Quit: Connection closed for inactivity)
19:41:13 --- quit: epony (Quit: QUIT)
19:44:02 <geist> well that ruined the evening
19:44:10 <geist> guess i'll go shopping. need some groceries
19:44:17 * Mutabah rolls tumbleweed across the channel
19:44:53 <heat> well sorry
19:46:08 * variable eats Mutabah
19:49:47 --- quit: Belxjander (Ping timeout: 246 seconds)
19:54:41 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
19:54:56 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
20:06:14 --- quit: Belxjander (Ping timeout: 248 seconds)
20:07:36 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
20:16:17 --- join: ohnx (~ohnx@unaffiliated/ohnx) joined #osdev
20:24:42 --- quit: heat (Remote host closed the connection)
20:24:54 --- quit: Belxjander (Ping timeout: 248 seconds)
20:29:20 --- join: navidr (uid112413@gateway/web/irccloud.com/x-smzgnpuodqsomkwf) joined #osdev
20:30:06 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
20:31:38 --- quit: chao-tic (Quit: WeeChat 1.0.1)
20:34:25 --- quit: variable (Quit: Found 1 in /dev/zero)
20:42:58 --- quit: davxy (Ping timeout: 260 seconds)
20:43:30 --- join: davxy (~davxy@unaffiliated/davxy) joined #osdev
20:46:28 --- quit: Belxjander (Ping timeout: 260 seconds)
20:49:08 --- quit: ahrs (Remote host closed the connection)
20:50:22 --- join: ahrs (quassel@gateway/vpn/privateinternetaccess/ahrs) joined #osdev
20:52:20 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
20:57:43 --- quit: king_idiot (Ping timeout: 264 seconds)
20:58:58 --- quit: zng (Ping timeout: 240 seconds)
21:01:42 --- join: epony (~nym@77.85.133.5) joined #osdev
21:03:00 --- quit: epony (Max SendQ exceeded)
21:03:57 --- join: epony (~nym@77-85-133-5.ip.btc-net.bg) joined #osdev
21:20:50 --- quit: aosfields_ (Ping timeout: 240 seconds)
21:22:10 --- quit: kasumi-owari (Ping timeout: 265 seconds)
21:23:56 --- join: kasumi-owari (~kasumi-ow@ftth-213-233-237-007.solcon.nl) joined #osdev
21:24:01 --- join: oaken-source (~oaken-sou@p3E9D257C.dip0.t-ipconnect.de) joined #osdev
21:25:33 --- quit: Belxjander (Ping timeout: 260 seconds)
21:26:26 --- quit: smeso (Remote host closed the connection)
21:31:18 --- join: smeso (~smeso@unaffiliated/smeso) joined #osdev
21:31:51 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
21:33:07 --- quit: davxy (Ping timeout: 264 seconds)
21:35:36 --- join: davxy (~davxy@unaffiliated/davxy) joined #osdev
21:38:30 --- quit: freakazoid0223_ (Quit: Some folks are wise, and some otherwise.)
21:39:07 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
21:52:50 --- quit: lordPoseidon (Ping timeout: 256 seconds)
21:55:13 --- join: elderK (uid205007@pdpc/supporter/active/elderk) joined #osdev
21:55:26 --- quit: Belxjander (Ping timeout: 246 seconds)
21:56:13 --- join: gdh (~gdh@2605:a601:639:2c00:e0d9:bf28:2735:8659) joined #osdev
21:56:39 <izabera> man strace | less +/Betchya
21:56:41 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
21:57:54 <variable> izabera: lol
21:59:00 <variable> izabera: tbf, dtruss > strace
21:59:28 <izabera> not enough betchyas in the man page
21:59:39 <variable> izabera: I'm really tempted to add a line to fbsd's /bin/true man page
21:59:54 <variable> "Unlike other implementations, .Nm always returns 0"
22:00:18 <variable> I like man pages with funnies
22:01:07 <variable> BUGS
22:01:07 <variable> sync() may return before the buffers are completely flushed.
22:01:13 <variable> is among my favorites
22:02:40 <izabera> man perlvar has this: "Remember: the value of $/ is a string, not a regex. awk has to be better for something. :-)"
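The sync() caveat quoted above is a real semantic point: sync() historically only schedules writeback and may return before the data is on disk, whereas fsync() on a particular descriptor does not return until that file's data has reached the device. A minimal sketch of the distinction, using a made-up journal.dat file:

    /* Illustration only: rely on fsync(fd) for durability of a specific file
     * rather than on sync(), which (per the BUGS note) may return early. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("journal.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) return 1;

        const char rec[] = "record\n";
        if (write(fd, rec, sizeof rec - 1) < 0) { close(fd); return 1; }

        /* Blocks until this file's data (and metadata) is on stable storage. */
        if (fsync(fd) < 0) { close(fd); return 1; }

        close(fd);
        return 0;
    }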
22:03:21 --- join: lordPoseidon (~lordPosei@14.139.56.18) joined #osdev
22:11:32 --- quit: pictron (Ping timeout: 256 seconds)
22:20:40 --- quit: Belxjander (Ping timeout: 248 seconds)
22:22:11 --- join: gamozo (~pleb@96.74.23.209) joined #osdev
22:26:19 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
22:30:48 --- quit: sprocklem (Ping timeout: 256 seconds)
22:33:02 --- join: sprocklem (~sprocklem@unaffiliated/sprocklem) joined #osdev
22:37:02 --- join: doug16k_ (~dougx@99.238.50.134) joined #osdev
22:37:42 --- quit: doug16k (Ping timeout: 248 seconds)
22:39:06 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
22:39:15 --- join: xenos1984 (~xenos1984@2001:bb8:2002:200:6651:6ff:fe53:a120) joined #osdev
22:44:13 --- nick: doug16k_ -> doug16k
22:45:00 --- join: user10032 (~Thirteen@90.209.104.11) joined #osdev
23:06:48 --- quit: ephemer0l_ (Ping timeout: 276 seconds)
23:07:02 --- quit: shymega (Ping timeout: 248 seconds)
23:11:21 --- quit: dbittman (Ping timeout: 276 seconds)
23:17:42 --- join: shymega (~shymega@torbaytechjam/shymega) joined #osdev
23:20:03 <doug16k> yay! my nvme driver is passing 8-cpu read stress with per-cpu command queues and msi-x irq-per-queue routed to the appropriate cpu :D
23:23:59 <_mjg> nice
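For context on what doug16k is describing: with one submission/completion queue pair per CPU, and each queue's MSI-X vector affinitized to that same CPU, a command submitted on CPU n completes with an interrupt on CPU n, so the ring never needs cross-CPU locking and completions stay cache-local. The sketch below only illustrates that layout and is not doug16k's driver; every name in it (nvme_queue, this_cpu, the doorbell field) is hypothetical.

    /* Per-CPU NVMe queue layout, illustrative only. */
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_CPUS    8
    #define QUEUE_DEPTH 256

    struct nvme_cmd {
        uint8_t  opcode;   /* e.g. 0x02 = NVMe READ */
        uint16_t cid;      /* command identifier */
        uint64_t prp1;     /* data buffer pointer (simplified) */
    };

    struct nvme_queue {
        struct nvme_cmd    sq[QUEUE_DEPTH]; /* submission ring private to one CPU */
        uint16_t           sq_tail;         /* next free submission slot          */
        volatile uint32_t *sq_doorbell;     /* MMIO doorbell for this queue       */
        int                msix_vector;     /* vector routed to this CPU          */
    };

    static struct nvme_queue percpu_queue[MAX_CPUS];

    /* Stand-in for the kernel's "which CPU am I running on" primitive. */
    static int this_cpu(void) { return 0; }

    /* No lock needed: only the owning CPU ever touches its own ring. */
    static void nvme_submit_on_this_cpu(const struct nvme_cmd *cmd)
    {
        struct nvme_queue *q = &percpu_queue[this_cpu()];

        q->sq[q->sq_tail] = *cmd;
        q->sq_tail = (uint16_t)((q->sq_tail + 1) % QUEUE_DEPTH);

        if (q->sq_doorbell)
            *q->sq_doorbell = q->sq_tail;   /* tell the controller there is new work */
    }

    int main(void)
    {
        struct nvme_cmd read_cmd = { .opcode = 0x02, .cid = 1, .prp1 = 0 };
        nvme_submit_on_this_cpu(&read_cmd);
        return 0;
    }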
23:25:42 --- quit: shymega (Ping timeout: 248 seconds)
23:28:56 --- quit: aosfields_ (Ping timeout: 264 seconds)
23:32:41 --- join: ephemer0l (~ephemer0l@pentoo/user/ephemer0l) joined #osdev
23:34:50 --- quit: wcstok (Read error: Connection reset by peer)
23:37:01 --- quit: jack_rabbit (Ping timeout: 265 seconds)
23:37:38 <gamozo> but does it pass 16-cpu tests!?
23:37:52 <gamozo> oh, what NVMe are you running? M.2, PCIe, U.2?
23:38:07 <gamozo> i know it doesn't matter on the software side, but I was thinking of getting a U.2 server; the drives seem more obscure though
23:40:15 --- join: quc (~quc@host-89-230-166-149.dynamic.mm.pl) joined #osdev
23:48:37 --- join: k4m1 (~k4m1@82-181-0-34.bb.dnainternet.fi) joined #osdev
23:48:46 --- quit: variable (Quit: Found 1 in /dev/zero)
23:49:22 --- join: jack_rabbit (~jack_rabb@c-73-211-181-224.hsd1.il.comcast.net) joined #osdev
23:49:39 --- quit: heinrich5991 (Quit: quit.)
23:49:54 --- quit: dude12312414 (Ping timeout: 240 seconds)
23:51:50 --- quit: transistor (Ping timeout: 256 seconds)
23:52:07 --- join: transistor (~trans@S01060018f8f95df7.vc.shawcable.net) joined #osdev
23:52:57 --- join: dude12312414 (None@gateway/shell/elitebnc/x-xpwjvjjuowcewzbx) joined #osdev
23:53:34 --- quit: user10032 (Quit: Leaving)
23:58:23 --- join: KidBeta (~textual@203.63.187.63) joined #osdev
23:58:46 --- quit: KidBeta (Changing host)
23:58:46 --- join: KidBeta (~textual@hpavc/kidbeta) joined #osdev
23:59:59 --- log: ended osdev/18.01.08