Search logs:

channel logs for 2004 - 2010 are archived at http://tunes.org/~nef/logs/old/ -- can't be searched

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present


http://bespin.org/~qz/search/?view=1&c=osdev&y=19&m=8&d=1

Thursday, 1 August 2019

00:19:43 <ybyourmom> I have to find a way to diplomatically work my way around a supervisor who is saying on a pull request that me using a variadic arg list in a function (i.e., '...') makes the code complex
00:20:02 <ybyourmom> Specifically, he said that removing it will "keep the code simple"
00:20:36 <heat> technically he's correct
00:20:43 <heat> and they're very error prone too
00:21:07 <ybyourmom> Mm, but in the situation it's been used, the alternative is more complicated, and would require me to write a `switch` statement
00:21:31 <ybyourmom> Here's a direct quote from the pull request lol
00:21:34 <ybyourmom> "I think it's better to keep this code simple even if it means adding an ugly switch statement to the packetdrill interface."
00:22:48 <ybyourmom> So he's saying that between just making the function take and pass on a va_list, and writing a big switch statement to try to decode what the argument list potentially might look like by using the IOCTL command
00:22:56 <zid> varargs is nice from a "using it" pov but for most things the actual varargs function can be pretty grotty
00:23:03 <zid> printf is a nightmare to actually write cleanly
00:23:13 <heat> yeah
00:23:14 <ybyourmom> He would prefer the latter
00:23:36 <zid> printf ends up with a switch inside it regardless, but writing it without backwards-gotos is really fucking hard :p
00:26:46 <heat> right
00:26:53 <heat> and printf is also a nightmare security-wise
00:27:10 <zid> not that all varargs functions are printflike ofc
00:27:25 <zid> it might just be an 'infinite push but everything is the same type', but with the switch being involved it doesn't sound like it
00:32:16 * ybyourmom decides to go for the diplomatic "phrase dissent as a question" approach
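
A minimal sketch of the two shapes being debated in that pull request; every name here (xfer_ioctl, the CMD_* constants, the lower_* helpers) is a hypothetical stand-in, not the real packetdrill interface. Option A forwards the caller's '...' as a va_list; option B is the "ugly switch" that decodes each command's arguments explicitly.

    #include <cstdarg>
    #include <cstddef>

    enum { CMD_SET_FLAG = 1, CMD_SET_BUF = 2 };        // hypothetical command numbers

    int lower_v(int cmd, va_list args);                // assumed lower layer, varargs form
    int lower_decoded(int cmd, int flag, void *buf, std::size_t len); // decoded form

    // Option A: accept '...' and forward the va_list; this layer never needs to
    // know what each command's argument list looks like.
    int xfer_ioctl(int cmd, ...)
    {
        va_list args;
        va_start(args, cmd);
        int ret = lower_v(cmd, args);
        va_end(args);
        return ret;
    }

    // Option B: the "ugly switch" the reviewer prefers; each command's argument
    // layout is decoded right here, so nothing variadic escapes this function.
    int xfer_ioctl_switch(int cmd, ...)
    {
        int flag = 0;
        void *buf = nullptr;
        std::size_t len = 0;

        va_list args;
        va_start(args, cmd);
        switch (cmd) {
        case CMD_SET_FLAG:
            flag = va_arg(args, int);
            break;
        case CMD_SET_BUF:
            buf = va_arg(args, void *);
            len = va_arg(args, std::size_t);
            break;
        }
        va_end(args);
        return lower_decoded(cmd, flag, buf, len);
    }
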
00:40:38 <johnjay> zid: goto is under-rated
00:40:47 <heat> goto is overrated
00:41:02 <heat> destructors and RAII are the shit
00:41:18 <ybyourmom> C++ stand up
00:42:00 <heat> auto gang
00:42:23 <heat> who needs to type a variable's type? pffft
00:43:10 <zid> auto is the one thing I like about C++ that I've seen
00:43:17 * ybyourmom writes beautiful code which keeps error handling out of the path of execution
00:43:31 <zid> goto is plenty and great for that, ybyourmom
00:43:57 * heat does that too
00:44:01 * ybyourmom writes even more code without goto by wrapping resources in classes with destructors
00:44:09 <heat> zid: he said beautiful
00:44:17 <zid> It is beautiful
00:44:55 <heat> sorry, I cannot hear you over my templates and classes that minimize code duplication
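
A minimal sketch of the pattern being described above: wrap the resource in a class whose destructor releases it, so error paths need no explicit cleanup code. File and count_bytes are made-up names for illustration, not anyone's actual kernel code.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Tiny RAII wrapper: acquiring the resource is the constructor, releasing it
    // is the destructor, so every early return (and every exception) cleans up
    // without any explicit error-path code.
    class File {
    public:
        explicit File(const char *path) : f_(std::fopen(path, "r")) {
            if (!f_)
                throw std::runtime_error(std::string("open failed: ") + path);
        }
        ~File() { if (f_) std::fclose(f_); }
        File(const File &) = delete;
        File &operator=(const File &) = delete;
        std::FILE *get() const { return f_; }
    private:
        std::FILE *f_;
    };

    int count_bytes(const char *path)
    {
        File f(path);                 // released on every path out of this scope
        int n = 0;
        while (std::fgetc(f.get()) != EOF)
            ++n;
        return n;                     // no cleanup ladder, no explicit fclose
    }
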
00:45:35 <zid> >beautiful >templates
00:45:38 <zid> oh no it's retarded :(
00:45:41 <ybyourmom> xD
00:45:57 * ybyourmom uses templates a lot in his kernel
00:46:22 <heat> same
00:46:34 <Mutabah> generics > templates
00:46:56 <ybyourmom> Actually, one thing I *hate* and regret a lot is that I didn't know enough about my language before starting my kernel
00:47:15 <ybyourmom> so I allowed the animals in the OSDev community to convince me that using exceptions in kernel is a bad idea
00:47:22 <heat> ybyourmom, it is
00:47:28 * ybyourmom disagrees
00:47:38 <zid> It is
00:47:39 <heat> exceptions are a bit trash, and in the kernel they are super duper trash
00:47:42 <zid> it's also a bad idea in normal code
00:47:48 <heat> exactly
00:48:05 <ybyourmom> Mm, exceptions don't incur any runtime performance penalty for their use -- they only incur a penalty when an exception is actually raised
00:48:17 <heat> slow, bloated, require a bunch of support library code
00:48:20 <ybyourmom> But if you were getting off the hot path, then you were incurring a penalty anyway
00:48:27 <ybyourmom> No, they really aren't slow lol
00:48:34 <heat> They do if you think about it
00:48:46 <zid> How would they *possibly* not cause a performance penalty?
00:48:47 <heat> more pages to fault in
00:48:52 <zid> Think about how they must be implemented
00:48:56 <ybyourmom> zid: Because of how they are implemented
00:49:02 <zid> by... magic?
00:49:14 <heat> you need to unwind the stack and do eh_frame magic -_-
00:49:14 <zid> my cpu runs machine code, what the hell does your run?
00:49:16 <heat> that's slow
00:49:36 <Mutabah> zid: sysV exceptions use the DWARF debug information to dispatch the exception
00:49:47 <heat> oh, and construct an std::exception object
00:49:50 <zid> okay?
00:49:58 <Mutabah> Zero cost (except for some extra code that's never executed) when the exception isn't taken
00:50:03 <zid> last I checked, not doing that is faster than doing that
00:50:05 <heat> all that takes more time than returning a value
00:50:07 <Mutabah> but much more expensive when the exception is raised
00:50:09 <zid> Mutabah: how do you know when to take it?
00:50:18 <ybyourmom> There's a lookup table in the binary which is put together by your compiler -- when an exception is thrown, the compiler directs the CPU to trace back through the stack (this is just normal stack unwinding) until it finds a function in the lookup table which signals that it was catching that class
00:50:38 <heat> ybyourmom: that process is very slow
00:50:38 <ybyourmom> The lookup table is only consulted when an exception is actually thrown
00:50:49 <zid> ybyourmom: how does it know when to throw it?
00:50:51 <ybyourmom> Yes, but it only occurs when an exception actually happens lol
00:51:02 <heat> zid, when you actually throw it
00:51:08 <zid> he said 0 cost
00:51:10 <zid> not "same as goto"
00:51:13 <ybyourmom> zid: In your code, you will do an `if (bad_condition) throw MyClass`
00:51:16 <zid> actually zero
00:51:31 <heat> exceptions are 0 cost when not thrown
00:51:35 <ybyourmom> But the difference is that you don't have to keep checking that and bubbling it up
00:51:41 <zid> except for the fact they're not
00:51:41 <heat> but they are very bloated and overengineered
00:51:45 <heat> I don't like them
00:51:50 <ybyourmom> You only need to do such checks in the very function where the condition can be detected
00:51:59 <zid> they're at *least* as complicated as goto, when not taken
00:52:03 <ybyourmom> >.>
00:52:07 <zid> and a billion times more when taken
00:52:33 <ybyourmom> I think you two should look into how exceptions and call-frame-info and landing spots are done by libsupc++
00:52:33 <heat> we never compared gotos to exceptions
00:52:39 <zid> kernel code is one where the 'taken' case is actually specifically normal, btw, which is why you definitely shouldn't use them in a kernel
00:52:40 <heat> because they're actually completely different
00:52:54 <heat> zid, is it?
00:53:12 <zid> heat: people are going to stat files that have been deleted etc
00:53:34 <zid> it's *normal* for kernel functions to fail a lot, not an exceptionally (heh) rare case where it can be really slow
00:53:39 <heat> that's probably not where you want to use them
00:53:53 <zid> so why use them
00:53:58 <heat> right
00:54:00 <heat> exactly
00:54:18 <zid> "I use them but only really rarely, so it's just confusing when you actually see them, because they're typically too slow" is not a good use case
00:54:50 <heat> the only place I can think of that you should use exceptions is in constructors because not using exceptions really limits what you can do in constructors
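
A small illustration of that constructor point (Buffer and BufferNoExc are made-up names): a constructor has no return value, so without exceptions the usual fallback is an empty constructor plus a separate init() that can fail.

    #include <cstddef>
    #include <new>

    // With exceptions: acquisition happens inside the constructor and a failed
    // allocation throws std::bad_alloc, so a Buffer never exists half-built.
    struct Buffer {
        explicit Buffer(std::size_t n) : data_(new unsigned char[n]), size_(n) {}
        ~Buffer() { delete[] data_; }
        Buffer(const Buffer &) = delete;
        Buffer &operator=(const Buffer &) = delete;
        unsigned char *data_;
        std::size_t size_;
    };

    // Without exceptions: a constructor can't report failure through a return
    // value, so the usual workaround is two-phase init, which every caller has
    // to remember to call and to check.
    struct BufferNoExc {
        BufferNoExc() : data_(nullptr), size_(0) {}
        ~BufferNoExc() { delete[] data_; }
        BufferNoExc(const BufferNoExc &) = delete;
        BufferNoExc &operator=(const BufferNoExc &) = delete;
        int init(std::size_t n) {
            data_ = new (std::nothrow) unsigned char[n];
            if (!data_)
                return -1;
            size_ = n;
            return 0;
        }
        unsigned char *data_;
        std::size_t size_;
    };
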
00:54:54 <ybyourmom> Exceptions are really, really, not that slow guys
00:55:31 <zid> depends if you want python performance or linux kernel performance
00:55:36 <zid> it's not slow for python!
00:55:54 <ybyourmom> The majority of kernel execution does not hit the error handling code :/
00:56:14 <ybyourmom> And you're not trying to optimize your kernel for people who pass in erroneous parameters
00:56:19 <zid> you should be
00:56:20 <heat> depends
00:56:22 <ybyourmom> Exceptions make the hot path faster
00:56:29 <zid> I bet my kernel 'fails' thousands of times a second
00:56:33 <zid> they do not make it faster, wtf
00:56:42 <ybyourmom> They remove unnecessary checks from "bubbling" errors up to the caller
00:56:46 <zid> so now not only do they take 0 time when not taken, they now are NEGATIVE time?
00:57:01 <heat> running stat to check if files exist is frequent
00:57:13 <ybyourmom> zid: Yes, my friend -- that's the entire premise of out of band error handling >.<
00:57:25 <ybyourmom> this is why your CPU also uses exceptions by design
00:57:32 <heat> most times they fail(do strace gcc)
00:57:41 <ybyourmom> For example, you could have had to check for DIV by zero manually every time you did a DIV instruction
00:57:43 <zid> No my CPU uses exceptions because you literally can't create variables without them, so you're unable to write code to process them
00:58:02 <ybyourmom> But the CPU manufacturers realized that out of band error handling speeds up execution because exceptions are exceptional
00:58:28 <ybyourmom> Out of band error handling speeds up the hot path
00:58:29 <heat> I get it, but most people throw exceptions for random shit that shouldn't be an exception
00:58:30 <zid> You do realize someone is still checking for divide by zero there
00:58:33 <ybyourmom> This isn't controversial
00:58:39 <zid> it's just in the latency of the div retiring
00:58:42 <zid> not hardcoded instructions
00:58:50 <ybyourmom> zid: Correct -- and the same is true in your C++
00:58:58 <zid> Except it's not, because someone has to check!
00:59:03 <zid> there has to be real machine code checking
00:59:17 <zid> my cpu runs machine code, it does not have try-catch acceleration
00:59:34 <heat> also, likely and unlikely builtins really speed up your branches
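
The builtins being referred to here are GCC/Clang's __builtin_expect; the likely/unlikely wrappers and the parse function below are just an illustrative sketch.

    // likely()/unlikely() as commonly wrapped around __builtin_expect: the
    // compiler lays the expected side out as straight-line fall-through code
    // and pushes the other side out of line.
    #define likely(x)   __builtin_expect(!!(x), 1)
    #define unlikely(x) __builtin_expect(!!(x), 0)

    int parse(const char *p)
    {
        if (unlikely(p == nullptr))   // error path expected to be cold
            return -1;
        return p[0];                  // hot path stays the fall-through
    }
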
00:59:40 <ybyourmom> So just like you don't have to bubble up and check manually for the "DIV by zero" in your ASM; in your C++ you don't have to bubble the error condition up to the caller and continually check for error returns from function calls
00:59:45 <ybyourmom> That's the point
00:59:54 <zid> why would anybody bubble up
00:59:58 <ybyourmom> You eliminate the bubbling-up of error checking from every function call
00:59:58 <zid> TCO has existed for 40 years
01:00:06 <ybyourmom> >.<
01:00:22 <zid> I literally just emit a jmp to the goto 20 levels up
01:00:36 <Mutabah> That only works when it is a tail call
01:00:43 <ybyourmom> zid: So when you call malloc(), you don't check for NULL?
01:00:46 <zid> it's error handling, of course it is
01:01:02 <zid> it's never going to make *more* stack frames and stay there, it's always going to go up
01:01:07 <ybyourmom> After each call, you, as the caller, don't check for the error that malloc() has bubbled up to you?
01:01:31 <ybyourmom> You don't put in that extra `if`, and take the penalty of the extra branch in your hot path?
01:01:32 <zid> ybyourmom: Do you really not get that if I do if(!x) return NULL; in 8 functions, I can just jmp to the outer one when I emit the machine code?
01:01:34 <heat> I can't imagine bubbling up as much as "construct exception object; call bloated_library_throw_exception; and then that bloated library which shall remain unnamed unwinds the stack"
01:01:57 <ybyourmom> zid: Your compiler is not doing that
01:02:00 <zid> it is
01:02:07 <ybyourmom> I'm pretty sure it's not lol
01:02:09 <zid> Because I watch it do it
01:02:29 <ybyourmom> If those functions are not all in the same compilation unit, no, the compiler is *NOT* doing it
01:02:34 <Mutabah> How can the inner call know where to jump?
01:02:36 <zid> lto doesn't exist either now?
01:02:50 <Mutabah> I guess if they've been inlined completely it'll happen
01:02:54 <Mutabah> but that's not the general case
01:02:57 <zid> Mutabah: I eat the same penalty as exceptions if it's 'dynamic' like that
01:03:03 <ybyourmom> >.<
01:03:06 <zid> but my actual penalty is lower because I'm not doing stupid things
01:03:14 <zid> like creating exception objects
01:03:17 <ybyourmom> Like checking for errors after calling functions???
01:03:37 <Mutabah> zid: I think you're missing the point about how C++ exceptions are implemented on non-MSVC platforms
01:03:57 <zid> go on then, what point
01:04:17 <Mutabah> zid: They do debuginfo based stack unwinding - on the happy path there's no checks made at all, and no registration of stack frames required
01:04:44 <ybyourmom> There is zero performance penalty for the use of exceptions on the hot path
01:05:04 <ybyourmom> You take on a memory usage penalty because of the CFI and stack landing frame metadata
01:05:10 <zid> incorrect, there's at *best* the same as C, at worst, more
01:05:17 <heat> and libsupc++ too
01:05:17 <ybyourmom> No, wrong, sorry
01:05:33 <Mutabah> zid: Do you have evidence to back your claim?
01:05:38 <ybyourmom> In fact, once again, exception usage removes bubbling error checks
01:05:45 <ybyourmom> And speeds up the hot path -- it is faster than C
01:06:02 <Mutabah> https://itanium-cxx-abi.github.io/cxx-abi/abi-eh.html - Citation for unwinding based exception handling
01:06:09 <Mutabah> (an interesting read IMO)
01:06:19 <zid> On itanium?
01:06:41 <heat> itanium = every arch
01:06:42 <Mutabah> It's the same API used for nearly all C++ implementations nowadays
01:06:46 <ybyourmom> The Itanium ABI was used as the form factor for the libsupc++ API on all the platforms
01:06:47 <heat> ^^
01:07:03 <Mutabah> Exception is MSVC x86, which uses windows SEH
01:07:04 <zid> I'm so not reading this
01:07:09 <zid> but the fact this code even exists makes it slower
01:07:19 <zid> because it doesn't have to if you're not doing it at all
01:07:26 <ybyourmom> zid: We've been trying to explain to you that it actually makes the hot path faster :/
01:07:35 <heat> I know you're all arguing about unwinding the stack and all, but my major gripe with exceptions is that they require a lot of support library code which is just too bloated for a kernel
01:07:52 <zid> ybyourmom: except it doesn't, because the hot path is still 'goto' in the *best* case in yours
01:07:55 <Mutabah> There's some slowdown to exceptions - they increase the code image size which reduces cache efficiency and slows down loads
01:07:55 <heat> I refuse to link in libstdc++ or libsupc++ with my kernel
01:08:27 <zid> https://godbolt.org/z/G8UoKQ This isn't as clean as I'd like due to some clever reordering
01:08:30 <Mutabah> zid: That's the point we're trying to make - with this exception ABI, there is no goto related to exception handling on the hot path
01:08:39 <ybyourmom> Mutabah: Right -- but the cache effect only comes in when you actually hit an exception -- all the CFI info is stored in a different section of the binary
01:09:12 <ybyourmom> And actually the fact that you have fewer branches for checking for errors both makes the cache usage in the hot path more effective, and also invokes the branch predictor less often
01:09:19 <Mutabah> ybyourmom: The handling code for a function is in the function itself... probably out of the way, but would still take some cache space (probably)
01:09:20 <zid> That godbolt does the thing you said C can't do, btw
01:09:38 <zid> write that with exceptions and have it be 'faster' pls, in any case, failure or non-failure
01:10:20 <zid> It completely splits the failure and non-failure cases and turns it into a single test
01:10:32 <ybyourmom> Mutabah: Well, no -- actually you rarely actually handle an error inside the throwing function -- you clean up intermediate state that hasn't been committed as yet, and then you bubble the error upward for somebody else to handle, technically
01:10:44 <ybyourmom> and usually, the person handling it is 3-4 function calls up the stack
01:10:47 <Mutabah> A trivial case like that when everything gets inlined into one call is kinda unfair
01:10:58 <zid> Mutabah: so I'm cheating because it works? lol
01:11:08 <ybyourmom> And usually, it's actually the main() path of the program, tbh -- you only ever actually *handle* the error once in the main() path
01:11:11 <zid> "You can't compare exceptions to nicely written and well optimized C, that's cheating!" :p
01:11:34 <ybyourmom> all the other `if (error) {}` checks in between that bubble it up just re-throw the error upward
01:11:36 <heat> you're not cheating you just gave a bad example that doesn't exist in the real world
01:12:16 <zid> idk what kind of errors you're handling, but mine all look suspiciously like 'free(); free(); close(); free()'
01:12:24 <ybyourmom> Exceptions recognize the fact that you will only ever actually catch and handle the error in the main() path and they let you actually just handle it there
01:12:27 <zid> but you need to be able to jump to 'somewhere' in that sequence
01:12:33 <zid> so god invented goto err1; goto err2; etc
01:12:55 <zid> and infact, gcc will aggressively inline things into each other, so your middle function and your inner function etc all share the same 'goto' list
01:13:00 <ybyourmom> zid: that's what the RAII is for -- I think your issue is that you don't have a mental model for how C++ works
01:13:05 <Mutabah> You're "cheating" because you're using a trivial example that is very inlinable to demonstrate that this example is better in general
01:13:07 <zid> so you end up with goto err1 and goto err1 in different functions going to the *same* list
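
A sketch of that goto ladder (all names invented): each failure jumps to the label that releases only what was acquired so far, so the success path pays nothing beyond the checks themselves, and under aggressive inlining the ladders of inner and outer functions can collapse into one list as described above.

    #include <cstdio>
    #include <cstdlib>

    static int do_thing(void)
    {
        int   err = -1;
        char *a   = nullptr;
        char *b   = nullptr;
        std::FILE *f = nullptr;

        a = static_cast<char *>(std::malloc(64));
        if (!a)
            goto out;
        b = static_cast<char *>(std::malloc(64));
        if (!b)
            goto out_free_a;
        f = std::fopen("/tmp/scratch", "r");      // hypothetical resource
        if (!f)
            goto out_free_b;

        /* ... use a, b, f ... */
        err = 0;
        std::fclose(f);
    out_free_b:
        std::free(b);
    out_free_a:
        std::free(a);
    out:
        return err;
    }
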
01:13:35 <zid> Mutabah: if your error handling is not inlineable it's probably dogshit slow with exceptions too, except I *also* get this case for free
01:13:43 * klange munches popcorn
01:13:55 <ybyourmom> Also, to be frank, you were going to unwind the stack as you bubbled the error upward anyway
01:13:56 <zid> It's because it's going across .so borders or something
01:14:35 <ybyourmom> The difference between C++ exception unwinding and the normal `if (error) { return -ENOFOO; }` recursive return spiral
01:14:46 <ybyourmom> Is that C++ uses a lookup table while unwinding
01:14:52 <zid> (except the recursive return spiral doesn't actually happen in practice)
01:15:08 <Mutabah> zid: It doesn't happen IF inlining happens
01:15:11 <zid> and if it does, it's across hard API boundaries, which you also have
01:15:42 <Mutabah> Inlining usually only happens for: single-use functions, and trivial functions
01:15:44 <ybyourmom> zid: what do APIs have to do with this?
01:16:04 <ybyourmom> When you switch between different APIs you stop checking for error returns from function calls?
01:16:07 <zid> ybyourmom: Can't think of a better term
01:16:12 <zid> anyone got a good one
01:16:26 <zid> Mutabah: You'd be surprised how aggressive and good inlining can be with LTO
01:16:44 <ybyourmom> I can't comment on the efficacy of LTO
01:17:01 <zid> it's as if everything from the 'public' api inwards is completely ABIless
01:17:03 <Mutabah> zid: Sure, but even it has limits
01:17:17 <Mutabah> and heuristics to what it will inline (or what is worth inlining)
01:17:24 <zid> (I still want a better term for this 'crossing an API barrier into a different compilation unit thing')
01:17:32 <ybyourmom> But I question whether or not it's really inlined to a simple `goto`, when that `goto` would be landing in an entirely different stack frame
01:17:56 <ybyourmom> And you would need to unwind the stack state to be able to viably jump to that `goto` landing point
01:18:07 <zid> So, give me the hot godbolt for the C++ version that's way faster than what I could do
01:18:08 <zid> I gave you mine
01:18:13 <zid> give me your 'cheating' version
01:18:31 <ybyourmom> That would need to be some VERY aggressive and *EXTREMELY* intelligent LTO function inlining
01:18:46 <zid> ybyourmom: there are no stack frames with aggressive inlining, that's sort of the point
01:19:06 <ybyourmom> zid: I am not cheating anything -- I've only been saying two things
01:19:12 <zid> all the err1 err2 err1 err2 err1 err2 across your three stack levels just turn into err1 err2 err3 err4 err5 err6 in a single one
01:19:19 <ybyourmom> (1) Exception handling makes the hot path faster
01:19:22 <zid> ybyourmom: You don't understand, or weren't paying attention
01:19:24 <zid> wow still with your lies
01:19:39 <ybyourmom> (2) Exceptions only incur a performance penalty at the point of exception
01:19:41 <zid> I literally demonstrated a case where exceptions are *way way fucking worse* hot path wise
01:19:53 <ybyourmom> zid: could you repeat it?
01:19:56 <zid> https://godbolt.org/z/G8UoKQ
01:20:46 <ybyourmom> But that's not how you write exception based code, zid
01:20:54 <ybyourmom> I'll rewrite it for you, sec
01:20:56 <zid> I know, that's how you write code without exceptions.
01:20:56 <zid> I got told I was cheating, I want *your* cheat. What's your case where exceptions blow me out of the water because I'm cheating and getting a code snippet that is completely one-sided
01:21:24 <Matt|home> ...
01:21:56 <Matt|home> unless you're working for a company where programming is your job how exactly does one "cheat" at coding.
01:22:08 <Mutabah> Matt|home: Cheating at benchmarks
01:22:22 <zid> AI is a good cheat, it's just if statements!
01:22:26 <zid> don't tell the CEO!
01:22:37 <Mutabah> Providing an example that does not represent actual use cases to illustrate that some code is faster when that may not be the case
01:22:46 <Matt|home> fair enough. my apologies, just slightly on edge from arguments earlier today.
01:22:59 <zid> I'm letting him cheat back
01:23:02 <zid> so we're fair
01:23:09 <zid> I 'cheated' by using code that'd inline
01:23:13 <zid> I want him to cheat now
01:23:20 <Mutabah> zid: Ok, so - my cheat is that `h` in this example is C++ and can throw an exception instead of returning zero
01:23:49 <Mutabah> https://godbolt.org/z/n4WxA4
01:24:26 <zid> so you added 8 lines of code and don't even handle the exception in them
01:24:30 <zid> how is this you winning?
01:24:33 <ybyourmom> zid: https://godbolt.org/z/LD7jrj
01:24:37 <ybyourmom> Stop and look at this
01:24:45 <ybyourmom> this is how actual exception code would be written
01:24:49 <zid> ybyourmom: doesn't compile
01:24:53 <ybyourmom> It's C++
01:24:57 <zid> and?
01:25:01 <ybyourmom> And I didn't write it to compile
01:25:07 <ybyourmom> Just read it; please
01:25:09 <zid> You set the compiler to C++, compile it, I can't fix it I can't write C++
01:25:13 <zid> I wanted to look at the machine code
01:25:27 <Mutabah> zid: My example has exactly the same level of exception handling as yours
01:25:37 <Mutabah> If the call chain fails, it prints "oh no"
01:25:39 <zid> Mutabah: Except the hot path is the same and the cold path is way way way colder
01:25:47 <zid> this is not helping your case
01:26:07 <Mutabah> Sure - but that's the point - the hot path is shorter (no jumps in the hot path), at the expense of the cold path
01:26:20 <zid> The hot path is not colder though, you just shifted the if() into h
01:26:30 <zid> nice cheat!
01:26:43 <Mutabah> I'm assuming that when `h` fails, it throws
01:27:17 <Mutabah> a reasonable assumption in a C++ project
01:27:31 <zid> Sure, hot path is not actually hotter though that's sleight of hand
01:27:41 <zid> (which is what I asked for, so well done)
01:28:38 <Mutabah> A fairer example would be to compare a larger project with more complex logic and multiple different call trees
01:28:52 <zid> I don't think you'd be able to without pinning a lot of logic into other features, sadly
01:29:03 <zid> you'd end up with overloaded operators or templates or whatever trying to get the complexity out ahead of C
01:29:06 <zid> and then it wouldn't compare
01:29:06 <Mutabah> My guess is that explicit handling would be smaller in code, but would have a more complex control flow
01:29:39 <zid> constructors and destructors etc are better examples than that
01:29:42 <zid> so pretend I said those
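
For reference, roughly the shape of the code being compared in this exchange; these are not the actual godbolt snippets (which aren't preserved here), and acquire_c/acquire_cpp are hypothetical. The error-return caller keeps a test after the call on its hot path, while the throwing version has no branch there and pays only in the cold case, via the unwinder and the .eh_frame tables.

    // Error-return style: the caller tests after every call, and the failure
    // branch sits right in the hot path (even if it predicts well).
    int acquire_c(int **out);                 // hypothetical: returns 0 or -errno

    int use_c(void)
    {
        int *p = nullptr;
        int err = acquire_c(&p);
        if (err)
            return err;
        return *p;
    }

    // Exception style: no test at all after the call on the happy path; a
    // failure in acquire_cpp() throws and the Itanium-ABI unwinder walks the
    // stack to a catch somewhere further up.
    int *acquire_cpp(int id);                 // hypothetical: throws on failure

    int use_cpp(int id)
    {
        return *acquire_cpp(id);
    }
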
01:30:27 <zid> That was fun
01:30:32 <zid> same time next week?
01:30:51 <Mutabah> Note - I'm not advocating that this exception handling method is DA BESTEST
01:31:08 <zid> nah you seem more neutral, ybyourmom is out here telling lies over and over :P
01:31:19 <Mutabah> It has its downsides, and I prefer that predictable runtime errors use explicit handling
01:31:24 <zid> I don't want to have to debug exceptions that's for damn sure
01:31:29 <Matt|home> exceedingly idiotic question, would you notice any significant performance increase at all if you had an entire OS, kernel and userland and all, written in asm on a modern day desktop, or would it basically be unnoticeable unless you were doing benchmark testing
01:31:35 <Mutabah> but in terms of code speed for rarely taken paths - exceptions are pretty nice
01:31:43 <zid> Matt|home: asm would almost certainly be slower
01:31:49 <Matt|home> really?
01:31:51 <Mutabah> Compilers are smart
01:31:53 <zid> anything larger than a few hundred lines is impossible to maintain
01:31:57 <zid> and compilers are fucking clever
01:32:00 <Mutabah> and (mostly) infallible
01:32:06 <Matt|home> ah right.. the optimization stuff
01:32:11 <zid> It takes a *lot* of time to write code to be fast in assembly
01:32:21 <zid> so in that same time I've written a better scheduler or whatever, and you lose
01:32:36 <zid> If we had infinite time each, me writing my C and you writing your assembly, maybe eventually you'd end up out ahead
01:32:39 <zid> but that isn't how real life works
01:32:53 <Matt|home> gotcha. just silly curious
01:33:07 <zid> compilers will eventually beat the best assembly programmer, but the best assembly programmer will never be 100000th as fast as a compiler
01:33:16 <zid> in terms of compilation speed
01:33:33 <zid> and they're already probably better than almost all 'experts'
01:34:05 <zid> and there's nothing stopping you writing the most performance critical parts in assembly if you happen to notice the compiler fucked it up
01:34:16 <ybyourmom> zid: https://godbolt.org/z/DhwygH
01:34:24 <zid> so now I have 'all the rest of the code is structured better, but the hot part is the same'
01:34:33 <zid> so the compiler still wins in practice
01:34:42 <heat> and the compiler very rarely fucks up unless it doesn't know what you're doing(which for example, C++ is good at telling the compiler that)
01:35:25 <heat> it legitimately keeps data on every AMD/Intel/whatever cpu so it knows exactly the best instructions, registers, and code path to use
01:35:38 <zid> ybyourmom: so you added all the code at .L3 into the cold path, we both still test eax, eax; jne in the hot path
01:35:51 <zid> and then you *also* added L13, L7, and L15
01:35:53 <zid> into the code path
01:35:58 <zid> and that calls even *more* external code
01:36:03 <zid> exceptions are definitely not a win here
01:36:21 <Mutabah> Did any of us say they were a win when the exception is taken?
01:36:24 <zid> the hot path is the same (because you stole my C style error handling for most of it) then added a shit load of exception code
01:36:27 <zid> Mutabah: he keeps saying it
01:36:43 <zid> he's adamant I can't do that inlining, apparently
01:36:48 <ybyourmom> zid: NO, read it again
01:36:56 <zid> read what again
01:37:04 <ybyourmom> L13 exists in the C example as well, and there's a RET there
01:37:16 <ybyourmom> Only the first 3 lines of L13 are executed, in BOTH C and C++
01:37:26 <zid> fine, ignore L13 then
01:37:28 <zid> argument stands
01:37:29 <ybyourmom> L7 is not executed on the hot path
01:37:32 <heat> guys, chill
01:37:45 <zid> I did not say L7 was in the hot path
01:37:45 <ybyourmom> And L15 is only invoked on exception
01:37:58 <ybyourmom> zid: Right -- the hot path is the same
01:38:37 <zid> So at *best* you got parity, but the cold path code is significantly slower, and there's potential that your cold-path blew your own hot-path out of icache
01:38:44 <ybyourmom> However, if there was not this very aggressive inlining case which was constructed specifically for your argument, there would be large performance gains from the fact that I eliminated your `IF` statements
01:38:54 <zid> I told you to construct your own argument, you just failed
01:39:06 <zid> not my fault
01:39:27 <zid> "if yours wasn't so damn fast because it didn't use exceptions, mine would be faster", well, no shit?
01:39:35 <Mutabah> Hey - let's not get at each other's throats
01:39:43 <zid> we.. are?
01:39:44 <ybyourmom> I have proven what I said: (1) Exceptions make the hot path faster, and (2) the performance penalty is only taken when the exception is thrown
01:39:48 <zid> news to me :D
01:40:01 <zid> "This is slower, QED it's faster", okay :p
01:40:15 <ybyourmom> I eliminated the `ifs` in your code, which would be there if this wasn't all in the same translation unit
01:40:24 <zid> The ifs are not in my machine code.
01:40:35 * klange is running out of popcorn
01:40:39 <ybyourmom> and the stack unwinding which was *not* generated by the compiler because of the inlining
01:41:13 <ybyourmom> Again, in the normal case, where you have functions calling outside their translation units, mine would be faster
01:41:25 <ybyourmom> In this case where there is inlining, I'm *as fast* as
01:41:45 <ybyourmom> We've been trying to explain just this -- nothing more
01:43:41 <zid`> My new router firmware is not as stable as it could be...
01:43:55 <zid`> br0: received packet on eth1 with own address as source address
01:43:57 <zid`> that seems bad
01:53:50 <ybyourmom> Matt|home: Are you miselin?
01:54:28 <Matt|home> hm?
01:54:43 <Matt|home> ybyourmom : no, i don't know who that is. do i sound like that person?
01:54:45 <geist> it's matt, at home
01:54:59 <Matt|home> correct
01:55:06 <geist> or | represents the outer wall of the home
01:55:07 <zid> geist is an undead, at home
01:55:13 <geist> and matt is outside
01:55:24 <Matt|home> https://github.com/mattaferrero/ <-- to avoid confusion, i usually have a bellsouth/at&t hostname when im at home recently.
01:55:38 <zid> I have muf.violates.me
01:55:39 <zid> I wish he'd stop
01:55:51 <geist> why am i undead?
01:55:58 <zid> geist: You're a geis?
01:55:59 <Kazinsal_> because you're a geist
01:56:02 * zid adds a t
01:56:04 <geist> oh hah. yes
01:56:19 <zid> forgetting your own name seems like a very undead thing to do
01:56:20 <ybyourmom> I thought ghosts were spirits of the dead
01:56:22 <geist> though i think geist is just more like 'spirit'
01:56:33 <zid> It's a direct cognate of 'ghost'
01:56:39 <ybyourmom> While the undead are the revived dead who are alive
01:56:41 <zid> dead but not dead
01:56:50 <geist> it's funny, i picked that like 25 years ago, and it was originally just start of LastnameFirstInitial
01:57:09 <geist> and turns out its also ghost in german, which was a bonus
01:57:09 <zid> Geis. T.
01:57:45 <Kazinsal_> I think my nick originally came from hitting the "random name" button in world of warcraft's character creator in 2005
01:57:47 <geist> twas basically the user named i needed to pick when setting up account on school computer
01:57:49 <zid> Good job I guess, the alternative was Telbrecht
01:57:52 <zid> err
01:57:57 <zid> AvisElbrecht
01:58:11 <geist> Kazinsal_: hah, yah. same with my character names i usually use online
01:58:11 <zid> mine just sounds nice and is short
01:58:23 <zid> My wow characters other than my first one is all cow puns
01:58:35 <zid> because I almost exclusively play tauren warriors
01:58:46 <geist> for stuff starting with Kaz was that undead?
01:58:53 <zid> Kaz'Modan
01:58:55 <Kazinsal_> yep
01:58:56 <zid> sounds dwarfish to me
01:59:11 <geist> or dwarf, yah
01:59:17 <Kazinsal_> later ended up using the name for a dwarf paladin
01:59:27 <Kazinsal_> maining orc rogue for classic though
01:59:38 <zid> I'm not sure I'll play it, I'm broke and I hate levelling in wow
02:00:02 <ybyourmom> Are there any good MMOs these days?
02:00:03 <Kazinsal_> I'm leading <Goon Squad> horde side this time around so it's going to be a world PvP adventure
02:00:27 <zid> I'm a main tank with world firsts :P
02:00:31 <zid> most pvp I ever did was a gimmick build
02:00:39 <zid> shield block value maxout that could oneshot clothies with crit shieldslam
02:00:53 <zid> err not world, realm, sorry, big difference
02:01:00 <geist> yah once i'm done with all this tripping i'll probably give the classic stuff another stab
02:01:12 <Kazinsal_> our raid composition is like, 40% rogues
02:01:16 <zid> lol
02:01:24 <Kazinsal_> this is going to be terrible for loot but HILARIOUS for open-world PvP
02:01:39 <zid> I wish I remember more of classic, other than how terrible it was to solo as a prot warrior then compared to now
02:02:25 <zid> I was a total noob in vanilla, took me until half way through tbc to get my shit together
02:03:10 <Kazinsal_> yeah I got carried through vanilla pretty hard. doesn't help that I was like, 12 at the time, I think
02:03:38 <zid> Yea I wasn't much older
02:03:48 <zid> I missed the start of vanilla
02:04:00 <zid> gates were already open, I hit 60, did a bit of MC and ZG then tbc hit
02:04:17 <zid> Was getting 15fps on a geforce TNT2 :D
02:07:01 <ybyourmom> I played GW2 for a while and it was pretty good, but the storyline content and PvE content eventually comes to a close
02:07:10 <zid> I bought GW2 and heavily regret it
02:07:14 <ybyourmom> and then you're left with just grinding for shiny gear
02:07:18 <zid> No fishing, terrible game
02:07:32 <ybyourmom> Mm, I liked it a lot -- but I wish they had worked on PvP
02:07:48 <ybyourmom> the PvP is extremely good and satisfying, but they don't think it is worth working on
02:19:40 <geist> yah i enjoyed GW2 casually, but once it turned into a grind i stopped playing mostly
02:19:50 <geist> i log in every once in a while and its bewildering what changes every time
02:20:04 <geist> haven't sat down to try to see what's going on and get through all the changes
02:20:07 <zid> I had fun doing jumping puzzles for a day or two
02:20:16 <zid> tried to get my crafting skills or whatever up and found it basically impossible
02:20:19 <geist> i do remember it being kind of fun just bopping between a rotating list of world events
02:20:20 <zid> then found it out it had no fishing
02:20:30 <geist> and beating the huge world bosses with 200 other people
02:20:31 <geist> that's fun
02:21:50 <zid> geist pay my wow sub thanks
02:22:49 <drakonis> geist: have you experienced mounts yet?
02:23:49 <geist> in what game?
02:23:52 <drakonis> gw2
02:23:55 <geist> ah no
02:24:04 <geist> i really haven't played it much in the last 3 or 4 years
02:24:55 <drakonis> oh man you're missing out
02:25:04 <drakonis> gw2's mount system is a lot of fun
02:25:22 <geist> note: i am immune to 'collect the whole set' such nonsense
02:25:34 <geist> so if what you're going to say is you can collect a bazillion mounts then it has no effect on me
02:25:37 <geist> a la WoW
02:25:38 <drakonis> oh you don't need to worry about that
02:25:43 <drakonis> its just 7 mount archetypes
02:25:49 <drakonis> they all play differently from eachother
02:25:54 <zid> I collect things until the first instance of something I can't collect appears
02:25:57 <zid> then I drop all interest
02:26:00 <drakonis> actually there's 8 now
02:26:16 <geist> and my main in WoW is a druid, so i dont need to collect any mounts
02:26:18 <geist> i *am* a mount
02:26:25 <zid> I still use my netherdrake
02:26:28 <zid> that bitch was a pain to get
02:26:43 <zid> They nerfed the requirements since though, which I always hate
02:27:05 <geist> yah that's the point. any amount of 'grind to get this' will just become obsolete in a few years
02:27:10 <drakonis> https://wiki.guildwars2.com/wiki/Mount
02:27:14 <geist> so really the journey is the point, not the treasure
02:27:20 <geist> since at the end it's just a record in a database
02:27:29 <drakonis> wow mounts are awful
02:27:37 <zid> netherdrake was fucking cool af
02:27:43 <drakonis> because they're all just mathematical increases to speed and don't have any feel
02:27:48 <geist> so either you really dig the fun of collecting it, and dont worry too much about how big your collection is, or you dont play
02:27:49 <zid> hardly anybody had it, I love a bit of prestige
02:27:51 <Kazinsal_> back in my day you took your money and you bought a mount
02:28:06 <Kazinsal_> oh what you want a better mount? no
02:28:12 <zid> Kazinsal_: back in my day, nobody did because the level 60 mounts were stupid expensive :P
02:28:14 <geist> zid: or yeah, temporarily they're cool looking
02:28:29 <Kazinsal_> congrats you hit 60. you can get a faster mount wearing armour and shit. it'll be ten times the cost of the last one.
02:28:32 <zid> I almost quit when they started giving tier models as pvp rewards
02:28:37 <Kazinsal_> want something faster than THAT? no fuck off
02:28:41 <zid> Kazinsal_: like 4 people in my raid group had epic
02:28:52 <geist> or if you're a drood you just go that fast
02:28:57 <zid> I don't know a single plate wearer who had one :P
02:29:02 <geist> or even better you jump off a cliff and turn into a bird halfway down
02:29:03 <ybyourmom> Yeah, world boss trains are fun in GW2
02:29:09 <geist> then land and insta switch to a land mount
02:29:13 <geist> totally kicks ass
02:29:21 <zid> armani war bear was another
02:29:24 <drakonis> my guy
02:29:28 <zid> but they actually locked that one partially at least
02:29:31 <drakonis> the rollerbeetle is the best thing about gw2 mounts
02:29:37 <zid> instead of making it 'free' once it was no longer current
02:30:16 <ybyourmom> I actually never got the rollerbeetle, but I got everything else
02:30:20 <ybyourmom> Well, I also don't have skyscale
02:30:25 <geist> zid: oh man i remember how much i ground for Cenarian (tier 1 druid) gear back in vanilla
02:30:28 <ybyourmom> But yeah, GW2 mounts are really fun
02:30:37 <ybyourmom> I should get rollerbeetle for fun
02:30:39 <geist> had to run MC for like 5 months to get half the pieces, then BWL
02:30:39 <drakonis> the warclaw is rad
02:30:44 <Kazinsal_> auuuugh the T1 grind
02:30:46 <geist> i still have that in my bank
02:30:56 <zid> I still never got my dungeon set 2
02:31:10 <ybyourmom> I won't ever get the Warclaw until they balance the classes for WvW and make WvW not a lagfest
02:31:18 <zid> I got full tier 7 4 days into wotlk and quit :(
02:31:22 <geist> you'll be waiting a long time
02:31:34 <geist> it's always out of balance one way or another, it's simply the way of things
02:31:43 <geist> you wait 6 months and they'll tweak it and it'll be out of balance some other way
02:32:21 <geist> i still remember the tail end of lich king when ArPen was completely out of balance
02:32:33 <geist> my feral druid had it maxxed out at like 75% or something ridiculous
02:32:40 <geist> and it got pretty silly
02:32:51 <zid> My favourite bit of wotlk is that they progressed the ilvls *so* far
02:32:55 <zid> that tanks hit max evasion
02:32:56 <geist> they got rid of it completely after that
02:33:07 <geist> yah druid tanks could do that i think
02:33:07 <zid> so they had to have a global -30% evasion buff in the last raid
02:33:38 <zid> pushing crit off the table? hold my bear I'm about to push hit off the table
02:34:20 <geist> yah crit and arpen, iirc
02:34:29 <drakonis> i'm hoping anet pushes out a new major update soon
02:34:31 <zid> I remember watching a rogue solo gruul
02:34:31 <geist> you tweaked your gear for either of those two
02:34:31 <drakonis> start LS5
02:34:44 <geist> yah evasion tanking rogues
02:35:02 <zid> wotlk started silly and ended silly
02:35:05 <zid> but the middle apparently was great
02:35:54 <geist> yah it was my favorite expansion
02:35:55 <zid> pretty sure a major guild got banned for skipping a phase in the arthas fight too?
02:36:11 <zid> I hated wotlk, I played the beginning where it was stupidly trivial (cleared naxx on day 4)
02:36:21 <zid> and then the end where all anyone cared about was 'LOL GEARSCORE'
02:37:19 <zid> I had a lot of fun with warlords of draenor
02:37:25 <zid> did 6/7 mythic in the first raid
02:37:36 <zid> not played since though
02:37:55 <geist> during the last expansion my old guild finally fell apart and i haven't really played since
02:38:03 <geist> though i'm still paying 6mo at a time for time on it
02:38:16 <zid> lemme play your account for vanilla :P
02:38:39 <geist> but we had a good run, lasted from about 2008-2017
02:38:47 <geist> mostly friendly lot
02:39:02 <zid> I liked friendly guilds but they never went anywhere
02:39:27 <geist> well, that was kind of our problem. we were nice and friendly, would raid weekly, but there was a growing faction of the want to be more hardcore folks
02:39:40 <geist> and they eventually disagreed with the guildmaster and then split off to form a new guild
02:39:44 <zid> I was never 'hardcore' I was just good :/
02:39:45 <geist> and then everyone else just sort of wandered off
02:39:45 <Kazinsal_> yeah my guild fell apart in the first raid of the most recent expansion
02:40:13 <geist> so i could go join them, i'm sure, but i was kindof miffed about the whole thing and just haven't really tried to play much since
02:40:18 <zid> My 'friendly' guild was wiping all day to the 3rd mythic boss or whatever, me having never made a mistake on a single fight
02:40:21 <drakonis> anyhow, i'm going to sleep
02:40:22 <geist> there are so many other games out there right now that i'm not hurting for stuff to play
02:40:27 <drakonis> wow's not my thing
02:40:36 <drakonis> slay the spire on the other hand
02:40:37 <geist> oh no you didunt
02:40:52 <zid> slay the spire is cool, I cbf to play it but I had fun playing it WITH a friend
02:40:53 <zid> over discord
02:40:57 <drakonis> yes i did
02:41:00 <drakonis> come at me
02:41:04 <zid> I cbf to click the actual cards and do the drudge work
02:41:08 <geist> sowry
02:41:13 <zid> but I liked picking the rewards and stuff
02:41:13 <geist> (been in canada the last week)
02:41:33 <geist> toss off
02:41:43 <Kazinsal_> hell yeah, canada
02:41:54 <zid> disregard canadia
02:41:57 <drakonis> canada is good
02:42:07 <zid> the west half is passable
02:42:51 <drakonis> but have you played modded slay the spire
02:43:01 <zid> nein
02:43:02 <geist> i have no idea what you're talking about, so no
02:43:17 <zid> It's a 1v1 turn based rpg
02:43:22 <drakonis> https://steamcommunity.com/app/646570/workshop/
02:43:24 <zid> but your commands are cards
02:43:39 <drakonis> its a really nice roguelike
02:43:44 <zid> it's not a roguelike
02:44:10 <drakonis> it has some attributes like the copious amounts of rng and permadeath
02:44:21 <zid> It's procedural
02:44:33 <zid> but it's definitely not a roguelike, infact, it's not even remotely like rogue :P
02:44:48 <zid> Closest I will allow is 'roguelite'
02:44:49 <drakonis> berlin interpretation strikes
02:45:08 <drakonis> its far more strategic than anything in the genre though
02:45:53 <drakonis> so, basically, the fun is that there's various combos you can do with each character
02:45:57 <drakonis> it is a work of art
02:56:10 * geist nods
02:56:19 <geist> usually i'm not a fan of cards based things
02:56:21 <geist> but that's just me
02:56:37 <zid> yea I cbf to actually play it, as mentioned
02:56:42 <zid> it's a lot of grunt work to actually play
03:02:47 <zid> geist: You going to audition for a role on Letterkenny now? I imagine they've had everybody in canadia on that show by now
03:02:58 <zid> people who've been there a month are probably more than fair game
03:07:46 <geist> zid: figure it out
03:08:10 <zid> I gotta pirate it all and read the credits at the end
03:08:15 <zid> sounds like effort
03:08:26 <geist> oh but letterkenny is so funn
03:08:30 <geist> funny you should watch it anyway
03:08:42 <zid> yea I see it on imgu sometimes, I'm watching SE1E01
03:10:12 <geist> https://www.youtube.com/watch?v=Kty5ZxlX5GQ is a pretty good standard scene
03:10:22 <geist> it's a word ballet
03:10:29 <zid> I'm watching it!
03:10:55 <graphitemaster> Buddy, you couldn't roll a tire down a fucking hill. You're fucking ten ply. You gonna go play a game of five on one?
03:11:06 <graphitemaster> +1 LK
03:12:11 <graphitemaster> Read this today and laughed
03:12:13 <graphitemaster> > It was developed by Sony, Toshiba, and IBM, an alliance known as "STI".
03:12:25 <ybyourmom> lul
03:15:26 <geist> https://www.youtube.com/watch?v=Z0sq3T5fErQ this one also sold me on LK
03:16:44 <ybyourmom> I only watch things that are extremely mainstream like Game of thrones and breaking bad
03:16:54 <ybyourmom> If most people don't like it, then it can't be that good
03:17:18 <Kazinsal_> got only lasted 8 seasons because of the tits
03:17:20 <ybyourmom> I'm 100% good sheeple
03:17:22 <zid> If most people like it it's shit
03:18:36 <geist> graphitemaster: Cell processors, eh?
03:18:53 <graphitemaster> Reading about the worlds greatest CEO
03:18:58 <zid> the ooya ceo?
03:18:59 <graphitemaster> Lisa Su
03:19:44 <geist> are you in love with AMD right now?
03:19:47 <zid> This sister in letterkenny is ridiculously attractive
03:20:05 <graphitemaster> geist, not right now, they reported losses this quarter and the stock prices dropped like $4
03:20:13 <graphitemaster> so now I'm hurtin'
03:20:14 <Kazinsal_> graphitemaster-chan and dr. su-sempai
03:20:24 <geist> oof, yeah i got some AMD stock a while back
03:20:28 <Kazinsal_> japan's newest weird romcom
03:20:38 <graphitemaster> Lisa Su definitely fucks
03:20:51 <geist> that's Dr Su
03:21:16 <ybyourmom> graphitemaster: Hmm, she seems to be performing well -- according to wikipedia she took on the CEO role in Jan 2012, and the stock chart shows that almost immediately the company went on an up and up
03:21:27 <graphitemaster> Of course, she fucked dude
03:21:30 <graphitemaster> *fucks
03:21:39 <ybyourmom> what does that mean in this context?
03:21:48 <graphitemaster> This guy doesn't fuck ^
03:21:53 <geist> eh?
03:22:21 <Kazinsal_> graphitemaster has been taking experimental soviet synthetic mescaline
03:22:36 <zid> Can confirm that Kazinsal_ fucks
03:22:40 * zid walks with a lump
03:22:44 <geist> and yes i've used eh twice in the last 10 minutes
03:22:47 <geist> i've been in canada too long
03:22:49 <zid> also a limp, the lump was a mis-aim
03:22:58 <Kazinsal_> oh fuck yeah bud
03:23:01 <zid> geist: you're using it like normal people tho
03:23:03 <graphitemaster> geist, My sabbatical depends on AMD stock behaving reasonably
03:23:03 <ybyourmom> geist: on vacation?
03:23:11 <zid> canadians use it where it isn't needed eh
03:23:12 <geist> heading back tomorrow
03:23:22 <geist> zid: hmm, true
03:27:18 <graphitemaster> geist, where you at in canada
03:28:14 <ybyourmom> the north
04:01:43 <johnjay> i read a story from someone who used mescaline
04:01:51 <johnjay> they said they just had a cactus and broke a piece off
04:03:25 <ybyourmom> what's that about meth?
04:20:08 <johnjay> ybyourmom: meth.. not even once
04:20:43 <geist> graphitemaster: the far east
04:21:20 <geist> well, mostly far east
04:22:29 <graphitemaster> that tells me very little
04:22:53 <klange> geist is in Canuckistan?
04:23:04 <graphitemaster> like in terms of cardinal directions, you've uncovered 25% of your whereabouts assuming the earth's magnetic poles are perfectly equal
04:24:22 <zid> I didn't know canadia had a far east
04:24:30 <zid> It has a west and a cheese section
04:25:47 <klange> Newfoundland I guess would be the very far east... the mostly far east is maybe Montreal or something?
04:26:45 <graphitemaster> Well if we're to go by where there's Google offices. It would have to be Ottawa or Montreal
04:27:15 <zid> what about arm offices
04:27:59 <graphitemaster> There's one ARM Holdings office in Vancouver, which is not very east
04:28:10 <zid> What about le ARM
04:28:17 <graphitemaster> le ARM?
04:28:18 <zid> do they have any in les candias
04:28:31 <klange> East? I thought you said Weast!
04:28:33 <graphitemaster> That's not how French works
04:28:37 <klange> "That's west, Patrick."
04:28:58 <graphitemaster> You can't just prefix words with "le" and say them in a French accent
04:29:06 <zid> Sure you can
04:29:26 <graphitemaster> If you want to get murdered in Montreal sure
04:30:10 <zid> They wouldn't catch me, they're all les retards
04:30:34 <zid> I'm a le animal in a fight, my fighting style is le ideal
04:31:12 <zid> My le arrogance knows no bounds, le perseverance of my le intolerance to the french also
04:31:24 <zid> If they're lucky I won't have to call them a le ambulance
04:31:49 <graphitemaster> The good news is ambulance is already a French word
04:31:53 <zid> all of them are
04:31:56 <zid> that was the joke
04:32:17 <graphitemaster> Oh, I didn't know there were that many
04:32:55 <graphitemaster> The thing to watch out for in French Canada is that not even they get French words right some of the time. Like if you put a French Canadian in France they wouldn't get very far because they have some really butchered slang
04:33:01 <graphitemaster> It's like a completely different language
04:33:06 <zid> it's.. a dialect
04:33:23 <graphitemaster> I would argue it's more than a dialect
04:33:24 <zid> I deal with that with americans all the time
04:33:27 <zid> dumbing my speech down
04:33:37 <graphitemaster> It's like a culture thing
04:34:13 <graphitemaster> France French is more ... posh compared to Canada French
04:34:22 <zid> Like I just said :P
04:34:25 <graphitemaster> :D
04:34:47 <graphitemaster> That being said. I think Canada French people are some of the most honest people on earth.
04:34:55 <zid> I hear they're all giant assholes
04:35:10 <graphitemaster> Oh they're assholes but they're honest assholes
04:35:22 <adu> zid: are you British?
04:38:02 <graphitemaster> I think most Americans find Canadians very _loud_. I've been told I'm very loud from all my American friends but never my European friends or Canadian friends.
04:38:17 <zid> an *american* called you loud?
04:38:19 <zid> fucking hell
04:38:33 <graphitemaster> Yeah I don't get it either.
04:38:45 <graphitemaster> Not just one American, many.
04:39:06 <adu> I live in Washington DC, but I consider myself Japanese
04:39:47 <zid> Identifying american tourists is really easy
04:39:50 <zid> you hear them before you see them
04:40:23 <graphitemaster> I find that Americans are more obnoxious than they are loud.
04:41:03 <graphitemaster> Like just the idea of human decency seems lost on a lot of tourists I've seen
04:41:55 <zid> We just go to new places to drink
04:42:26 <ybyourmom> America is the greatest nation on earth
04:43:59 <graphitemaster> Like, you're out abroad visiting say Japan where it's seen as extremely impolite to do certain things like walking around in public eating or smoking, asking fucking geishas / maiko for SELFIES, and not taking off shoes and using the provided slippers
04:44:03 <graphitemaster> I don't get it.
04:44:19 <zid> americans view other countries like zoos
04:44:43 <graphitemaster> It's like they just go and visit places and don't care for learning their culture and assimilating for the short period they're there
04:44:54 <graphitemaster> They just want to be themselves someplace else
04:46:03 <ybyourmom> Well america is an individualist culture lol, they don't care about "culture"
04:46:20 <adu> I think that's called culture shock
04:46:40 <graphitemaster> Canada has a very simple culture. Come here but be polite. Try the poutine and breakfast for dinner at least once and experience a game of hockey.
04:47:00 <zid> It's a simple culture for simple people
04:47:31 <zid> It's honestly amazing canadians are as well adjusted as they are, being half american and half french
04:48:12 <graphitemaster> I think a lot of it has to do with the fact that we keep ourselves busy with yard work.
04:48:27 <graphitemaster> If it's not a shit ton of leaves to deal with it's a shit ton of snow.
04:48:38 <zid> shovelling all that snow must take a lot of your day up
04:48:43 <ybyourmom> Also does anything important happen in canada?
04:48:51 <zid> a moose once bit my sister
04:48:56 <graphitemaster> I mean does anything important happen in America?
04:49:31 <graphitemaster> I'm pretty sure both of our countries could cease to exist tomorrow and China would be ready to fill our shoes pretty much immediately.
04:49:51 <graphitemaster> The only thing missing in the assembly stage is the "made in USA" sticker that can only be done in the USA.
04:50:09 <graphitemaster> Not much of a loss if you ask me.
04:51:18 <zid> At least china doesn't pretend it has worker freedom and things :P
04:51:50 <zid> Country needs more steel? Okay, we're opening 10 new steel mills, 10 party members run them, all the profit is ours and there's no red tape
04:51:55 <zid> not surprising they're doing so well
04:52:53 <ybyourmom> "There's no red tape"
04:53:21 <graphitemaster> zid, No self respecting Canadian shovels all snow. We have snow blowers. They have a steel auger that spins around and eats the snow, then shoots it out the top in any direction I point it, everyone pushed everything out onto the street where we get big snow banks and the streets get thinner until they're only a single lane and then the city comes along and collects the snow where it gets transported to a SNOW DUMP where it sits and
04:53:21 <graphitemaster> eventually melts once summer comes by - then all the poor people collect the garbage that was in the snow from there, like unclaimed Tim Hortons roll'up the rim prizes, Canadian Tire money and hockey tickets.
04:54:21 <graphitemaster> That's the most Canadian chain of events I can muster
04:55:09 <zid> I did see a thing about the ploughs going out when it snowed
04:55:15 <zid> efficient like and high volume
04:55:22 <yrp> your exclusion of the west coast of canada in your narrative is typical canadian bias
04:55:38 <zid> because we had a bit of snow and people were moaning that things had shut down and pointed to canada
04:56:04 <zid> then they pointed out it cost a million per guy and they had like 20 guys doing one city just to get arterial routes clear
04:56:08 <zid> and they shut up
04:56:27 <graphitemaster> zid, The highways are like that, 3am on a Monday, there are snow trucks in every lane, staggered in such a way that snow travels from one to the other while driving so it all ends up on the side of the road, they all move at high speeds and it's like tandem bike riding, super impressive
04:57:38 <zid> yerp
04:58:01 <zid> those guys pay for themselves easy, wouldn't here
04:58:07 <zid> so anything more than a fleck of snow and we shut down
04:59:18 <graphitemaster> you should see how we snow plow the train tracks
04:59:45 <graphitemaster> we have the northlander, super cool stuff
04:59:54 <zid> I want a nuclear icebreaker
05:00:03 <zid> I'll name it shirase and break some ice
05:00:37 <graphitemaster> http://www.trainweb.org/onrailfan/100_0409%20(Medium).jpg
05:00:40 <graphitemaster> This thing fucks
05:00:43 <klange> I want a car. I bought a car, but I don't have it yet. I want my car :(
05:00:50 <zid> oh ya she's a beaut
05:00:59 <zid> but she's no nuclear icebreaker
05:01:18 <graphitemaster> No but she can tear an asshole in a snowman
05:01:20 <zid> https://upload.wikimedia.org/wikipedia/commons/f/f5/Yamal_2009.JPG
05:01:31 <graphitemaster> That's just not fair
05:01:52 <zid> The russian letters are what sell it
05:02:19 <zid> icebreakers don't just plow into the ice, they ram it and ride up onto it then crush it with their weight
05:02:50 <graphitemaster> That's not Russian letters. Those are the letters of another ship called the HAAMR (hammer) that ran INTO that boat, the AAMR fused to the side hence its backwardsness
05:03:02 <graphitemaster> :D
05:03:31 <graphitemaster> Didn't even leave a dent
05:03:36 <graphitemaster> Just some lettering
05:04:30 <zid> Anything nuclear powered is just better
05:04:54 <graphitemaster> The Russians don't have the best track record when it comes to nuclear power
05:05:05 <klange> https://en.wikipedia.org/wiki/Yamal_(icebreaker)
05:05:19 <klange> Nice boat.
05:05:33 <graphitemaster> klange, what car did you buy?
05:05:37 <klange> Civic Type R
05:05:42 <zid> ooh nice
05:05:50 <graphitemaster> You're really embracing the ricer meme
05:05:57 <zid> going to drift it down some mountain passes blaring gas gas gas?
05:05:58 <graphitemaster> time for a spoiler and a Gentoo install
05:06:07 <klange> It comes with the spoiler.
05:06:13 <klange> zid: it's front-wheel drive, so no
05:06:22 <graphitemaster> Well then time for a Gentoo install.
05:06:46 <graphitemaster> Holy shit that car is $42k CAD
05:07:11 <graphitemaster> Why so expensive
05:07:20 <klange> I paid 60k CAD after all the options and local taxes
05:07:23 <zid> klange: Your problem being?
05:07:35 <klange> because it's a race-tuned 2.0L turbo
05:07:40 <zid> FF + spoiler sounds extra fun
05:07:42 <graphitemaster> klange, which body style, 3-door hatchback, 5-door hatchback or 4-door sedan?
05:07:50 <zid> gotta lift those front tires off so you have no grip
05:07:54 <klange> type r is only 5-door hatchback
05:07:56 <zid> helps you drift
05:09:57 <graphitemaster> I mean if you like it, why not. In my personal opinion, which doesn't matter but which I feel the need to state anyways - I think the car looks ugly.
05:10:21 <zid> what
05:10:23 <klange> I like the styling, it's exactly my sort of over-the-top aggressive look.
05:10:28 <zid> how the fuck is it ugly
05:10:43 <zid> klange: did you get it in the wrong colour, or black?
05:10:49 <graphitemaster> It looks like a fucking HotWheels toy.
05:10:57 <zid> Yes, and when it goes TSSSS
05:10:58 <klange> I got it in white with intent to cover it entirely in vinyl wraps.
05:11:06 <zid> it looks perfectly correct
05:11:28 <zid> if it went TSSS and looked like a vw beetle you'd get a lot of dissonance
05:11:30 <graphitemaster> Like when I see this I think the man behind the wheel probably doesn't fuck
05:11:34 <zid> klange: waifus?
05:11:40 <klange> waifus
05:11:42 <zid> nice
05:11:53 <zid> When are you picking me up
05:12:12 <klange> tasteful, Super GT-style Mikus
05:12:19 <zid> tasteful indeed
05:12:21 <graphitemaster> Holy shit klange hasn't gone full ricer, he's gone full incel.
05:12:24 <zid> can we get a megumin bychance?
05:12:27 <klange> whenever you land in tokyo [as long as that's after September]
05:12:45 <zid> I'm sorely tempted now
05:12:53 <zid> the moment I found someone to countersign a passport application
05:13:44 <graphitemaster> If you're going to Japan you need to try out the synthetic mdma replacement they have dude, it's something else.
05:14:55 <graphitemaster> Japanese rave culture kind of stinks though
05:15:35 <klange> we didn't get the 60s here
05:17:01 <zid> You know what they did get though
05:17:05 <zid> eurobeat
05:18:27 <graphitemaster> The venues have long lines, long wait times, very little drug use, silly entrance fees and drugs and alcohol are extremely expensive.
05:18:43 <zid> soo... a club? :p
05:19:06 <graphitemaster> I can't get over the fact that something as simple as a beer costs 8x more than what it costs here
05:19:19 <klange> Does it?
05:19:22 <graphitemaster> Yeah
05:19:29 <zid> pint in a club is anywhere from £4 to £40 here
05:19:30 <klange> The standard here for clubs is ¥2000 cover and ¥500 drinks.
05:19:59 <klange> €16.50 €4
05:20:12 <zid> yea seems normal
05:20:51 <graphitemaster> The aura lounge here has $1 beer nights
05:21:17 <graphitemaster> That's 80 yen iirc
05:21:26 <klange> 110
05:21:28 <zid> brits would die if they offered that
05:21:40 <zid> It's criminal to price alcohol that low
05:21:51 <klange> wait where is "here" for you I forget, is that AUD? then sure, 80
05:22:04 <klange> It actually is criminal to price alcohol that low here.
05:22:24 <graphitemaster> Osaka
05:22:29 <zid> they changed the law on that here recently but I don't know it, not sure it applies to open stuff
05:22:30 <zid> or beer
05:22:38 <graphitemaster> also here is Canada for me
05:22:49 <graphitemaster> I'm comparing to the prices I experienced in Osaka
05:22:50 <zid> canadian rubles
05:23:00 <klange> I know nothing about Osaka, maybe they're dumb over there.
05:23:07 <klange> Osakans are the hicks of Japan after all.
05:23:34 <zid> klange has a thick osaka accent
05:24:25 <graphitemaster> I literally was like "okay I need to visit Japan where should I go" and I was like "well if I'm going to go I need to try this MDMA thing my Japan friend told me about" to which I was like "okay so I need to find a place in Japan with the best night life and least amount of cops" and I did some research and Osaka was the best hit
05:24:50 <graphitemaster> So that's how that worked out
05:25:32 <klange> I imagine you went to the sort of place that has drugs, which was run by the yakuza, and they probably charge much higher for everything
05:26:25 <klange> because you're not paying to get into a club and drink beer, you're paying to be in a place that will let you do your synthetic mdma without getting arrested (and deported, and banned permanently)
05:26:30 <graphitemaster> Possibly. But I didn't get raped, murdered or kidnapped so it's not all that bad. I'd consider it a success.
05:26:53 <zid> no rape? dang, u ugly?
05:26:56 <klange> Of course, the Yakuza aren't savages. Now the Nigerian touts in Roppongi... that's a different story.
05:27:58 <zid> klange: How likely are we to pick up some schoolgirls in your waifuwagon
05:28:15 <zid> jk with the socks, I like the socks
05:29:47 <graphitemaster> I tried okonomiyaki "savory pancakes" and I really liked it. It's like the Japanese version of an omelette, I would argue, or a pizza. I was really impressed by it actually, tried many different ones while I was there. I'm not a fan of seafood in general and it's hard to avoid it in Japan sometimes.
05:30:11 <zid> what are the socks called btw
05:30:16 <klange> which socks?
05:30:18 <klange> graphitemaster: my man
05:30:23 <zid> the baggy ones
05:30:32 <zid> jk always be wearing them in media
05:30:46 <graphitemaster> klange, you like it too I take it?
05:30:49 <klange> you know those aren't a thing and haven't been since the 90s, so if you find a girl in a school uniform wearing them she's in her late 30s and nostalgic about her youth
05:31:06 <klange> graphitemaster: love me some okonomiyaki
05:31:07 <zid> boo :(
05:31:24 <zid> ah well if she's 30 and japanese she'll look 18 anyway
05:31:47 <klange> and she'll be legal, so really it's just a win on all fronts
05:31:53 <zid> https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTVVrb6TB5uRZvqlFSB28up1ej1xLQ-oXmQ3YwbvU5WMJLQlJr8ng
05:31:54 <graphitemaster> they make it right in front of you, it's so cool, or you can make it yourself but ask for help (I never tried, didn't want to fuck something up and be a disgrace)
05:32:01 <zid> so yea does this have a name
05:32:32 <klange> i don't recall
05:32:33 <klange> probably
05:32:40 <klange> I don't pay attention to old fashion trends.
05:32:45 <zid> random dictionary lookup for socks gives ルーズソックス
05:32:47 <zid> thanks dictionary
05:32:55 <zid> looks like it might be right
05:34:17 <graphitemaster> I feel like okonomiyaki would be the food of choice for a stoner in Japan
05:34:41 <graphitemaster> It just has all the right options for customization and experimentation
05:34:57 <klange> Nah, probably something from a cheap place like Sukiya.
05:35:48 <zid> love ya too klange
05:37:17 <graphitemaster> yo what is up with tamagoyaki, why is it square
05:37:37 <zid> you don't have square eggs?
05:37:49 <klange> It's usually a bit round, like slices of bread.
05:37:55 <graphitemaster> yeah a tad
05:37:59 <graphitemaster> but like the square pan
05:39:23 <graphitemaster> I guess it's easier to roll egg when you have a flat square of egg than it is a non-square sheet of egg
05:39:54 <graphitemaster> probably all about that consistency and not having to cut off too much to make that dense clean shape
05:40:22 <graphitemaster> that's so creative
05:41:44 <graphitemaster> zid, no, but we do have square watermelon
05:42:07 <zid> It's easier if you make the hen square first
05:42:13 <zid> then your tamago are square to begin with
05:42:28 <graphitemaster> what came first, the chicken or the square?
05:42:40 <zid> Plato
05:42:58 <zid> Behold, a featherless ideal square, he said, holding up a chicken
05:44:21 <klange> I thought all animals were assumed to be ideal point masses for the purpose of simplicity.
05:45:18 <graphitemaster> "Fearing the sharpness of the square egg. Zeus rounded the corners, condemning us to spend our lives dealing with that bullshit" - Plato, The Symposium
05:45:38 <graphitemaster> Yep, it's right there folks. Plato knew it all along
05:47:21 <zid> I feel like I should get a high-five for that Plato's theory of forms + Diogenes joke
05:49:13 <zid> uncultured swine
05:49:25 <graphitemaster> I mean it was good but Plato was also a bit of an idiot so you were really just working against yourself.
05:49:34 <graphitemaster> You broke even
05:50:22 <zid> Diogenes however was a pimp
05:51:44 <graphitemaster> Diogenes was a straight up anarchist
05:54:50 <graphitemaster> I could just imagine if he was around to see today how he would act
05:55:03 <graphitemaster> Thermonuclear war
06:22:34 <immibis> I can't see why I got pung, so I'm going to assume it was a spambot
07:46:02 <squaremelon> graphitemaster: You can do millCPU alike things with intel/AMD chips too.
07:47:39 <squaremelon> graphitemaster: Looking at your comments in the past, i think you know that there are uniform distributions, where data can be saved to 32-bit or 64bit variables, with simple arithmetic
07:49:04 <Mutabah> martm? Long time no see if so
07:52:44 <squaremelon> Yeah i've been busy developing stuff, having realised how badly linux optimizes things.
07:53:28 <aalm> did this "08:41:45 graphitemaster | zid, no, but we do have square watermelon" trigger you?
08:06:42 <squaremelon> Not in thought, but i am used to looking at what graphitemaster talks about, though; technically i know my own stuff too.
08:06:54 <squaremelon> he has good thinking
08:10:54 <squaremelon> i recently tried to post this kind of distribution of numbers, to store opcodes as boolean index numbers in separate say 64buffers or regs or cache variables.
08:12:59 <squaremelon> 2+6+14+22+34 this is what i have in notes, you checksum it offline into memory variable, 2 and 6 need to be there always, others can be arbitrary but need to be 4aligned for instance
08:16:47 <squaremelon1> 78 has two bit in that is dummy, then 78-2 hasn't got two bit in so those are dummies. then 76-6=70 has two bit in, 70-10 hasn't etc on so forth
08:20:32 <squaremelon1> this can be done on top of tomasulo on most cpus to bypass fetching instructions at runtime, so tomasulo also bypasses decode, so results are huge. Needs some in kernel plumbing for scheduling that way.
08:22:35 <squaremelon1> there is also atom and arm in-order queues, atom has little different ways of doing that and ARM in-order versions have tightly coupled memories for fast procedures
08:27:27 <squaremelon1> I have all the hardware in the world covered, gpus and cpus; they should all work fine when driven correctly, subtle differences though. the point is that neither the driver nor the os under linux does anything so far to optimize code for them.
08:28:30 <aalm> 42 ?
08:28:40 <klange> 42 is the answer
08:31:29 <squaremelon1> listen i have piles of work to do, but not so much time, what do you mean , i gotta go soon? what answer?
08:31:58 <squaremelon1> you need to look at second bit to see, if that index upon subtracting with 4 from the last remainder is in
08:32:09 <squaremelon1> to see if that opcode is at that index
08:33:43 * Mutabah squints
08:33:51 <klange> 42 is always the answer, to everything.
08:34:39 <Mutabah> I can't make out any sort of logical flow here, maybe it's just your poor grasp of the English language?
08:35:45 <squaremelon1> normally you have 64regs different kinds or say upto 64 opcodes, you checksum the values into 64buffers, and runtime run 64fast bitwise operations on the current index on those buffers
08:35:54 <squaremelon1> to see what opcode to run on tomasulo
08:37:18 <squaremelon1> if one of the buffers have two or 2 in power of 1 in on that index then this is that opcode
08:37:56 <klange> "What's 7×6?" 42. "How many isomorphism classes are there of simple oriented directed graphs of 4 vertices?" 42. "What is the third primary pseudoperfect number?" 42. "How many ways can you represent 10 as a sum of positive integers?" 42. "What is the atomic number of molybdenum?" 42. "What constant does glibc's memfrob XOR with?" 42. "What is the decimal representation of the ascii character '*', the comm
08:37:57 <squaremelon1> on tomasulo bit comparisons like or, xor and and are very fast
08:38:02 <klange> on representation of a wildcard meaning everything?" 42. "How many days is the default Windows domain password expiration policy?" 42.
08:38:10 <squaremelon1> bitshifts are depending on the amount of shift though
08:38:31 <squaremelon1> and sub and add are also fast as long as you do not generate or propagate carries or borrows
08:38:47 <squaremelon1> so those 64ops are highly fast taking couple of gate delays on chips max
08:38:59 <squaremelon1> each
08:39:01 <aalm> .theo
08:39:01 <glenda> Well, is mentioning this even important?
08:39:37 <Mutabah> glenda always on point
08:43:49 <squaremelon1> similarly it is done with constants the same way as with instructions, you generate the constants with arithmetic, which is a lot faster than loading them from ram.
08:44:06 <klys> oh dear
08:47:11 <klys> so eh, I have this 3 TB disk: https://paste.debian.net/1093689/
08:48:01 <squaremelon1> our country's education is very strong, we have learned all this in schools; i only quit university, but all the important stuff was already done luckily
08:49:34 <klys> what do you get when you multiply six by nine
08:52:13 <Nuclear_> klys: I don't have a single disk without pre-fail and old_age... they all work fine
08:52:32 <klys> it sure seems to generate a bunch of seekcomplete messages
08:53:05 <klys> well
08:53:06 <Nuclear_> hmm your Reallocated_Sector_Ct is a bit troubling
08:53:15 <Nuclear_> no nevermind, wrong line
08:53:29 <Nuclear_> that was the seek error rate I was looking at
08:53:41 <klys> 100
08:53:45 <klys> yeah
08:54:44 <Nuclear_> I have 200 in all three disks
08:56:55 <klys> nuclear_, it's basically spitting this out to dmesg all the time: https://paste.debian.net/1093801/
08:58:27 <squaremelon1> though i say this, the person going through such educational programs needs to be smart to put that info into practice. i am the rare one there the most violated by ridiculous morons in estonia, however those violators and illegal conspirators will never legally compete with me, no matter if there was an arrangement where i got injured; violation is not a route to skills
08:58:29 <squaremelon1> bye
08:59:41 <Nuclear_> klys: that seems like it means it's losing the disk and re-attaching it
08:59:51 <Nuclear_> I don't have much experience with sata and its protocols tbh
09:01:16 <Nuclear_> it's hard to know if that's an issue with the disk, or the cable, or the controller on the motherboard...
09:01:44 <Nuclear_> I'd start by switching sata ports, and also adding another sata disk to see if you get the same errors with that one or not
09:02:07 <Nuclear_> and if it's definitely the disk, just get a new one and copy everything useful over before it dies :)
09:02:53 <klys> well I have backups from this disk, I'm pretty compulsive about making backups lately.
09:06:10 <Nuclear_> that's a good habit I never managed to acquire ...
09:06:34 <klys> well I have another 4 TB external and new 3 TB internal elsewhere
09:06:38 <Nuclear_> I have set up cronjobs for some really important stuff, but I'll lose most "medium-importance" files if I lose a disk :)
12:34:15 <squaremelon> klys: i do not pay much attention to what other guys say and complain about, it is how it is. 6*10-6=54 but it is part of what we call the multiplication table, this answer needs to come within a split second
12:34:55 <squaremelon> those tests we did in 4th grade maybe
12:36:34 <squaremelon> first you learn at a young age how to calculate up to a hundred
12:37:04 <squaremelon> then you apply this to calculate bigger values obviously
12:38:52 <Nuclear_> squaremelon: when you get to reading, pick up hitch-hiker's guide to the galaxy
12:44:32 <squaremelon> the procedures i gave can be optimized for holes in the 4value alignment doing fast calculations all the way.
12:52:13 <squaremelon> i had withdrawn any patent ideas, since i see to many options to do things like those.
12:52:21 <squaremelon> *too
13:02:38 <heat> hey so I'm trying to fiddle around with arm64 and qemu's virt machine type
13:02:41 <heat> any tips?
13:03:06 <heat> i've seen people use u-boot and I'm not sure if I should use it
13:04:45 <heat> and x0 (the register where I think the DTB should be passed) is 0, so I'm not sure if it's invalid or if I'm misunderstanding things and it's not in x0
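A minimal sketch of what's being described above, assuming the kernel is entered via the Linux arm64 boot convention (which QEMU's -kernel loader on the virt machine follows): x0 holds the physical address of the DTB, x1-x3 are zero. kmain is a hypothetical entry point called from a tiny asm stub that forwards x0 unchanged.

    #include <cstdint>

    extern "C" void kmain(uint64_t dtb_phys) {
        // Park the core if no DTB pointer was handed over at all.
        if (dtb_phys == 0)
            for (;;) asm volatile("wfe");

        // A flattened device tree starts with the big-endian magic 0xd00dfeed.
        const uint32_t *fdt = reinterpret_cast<const uint32_t *>(dtb_phys);
        if (__builtin_bswap32(fdt[0]) != 0xd00dfeedu)
            for (;;) asm volatile("wfe");   // x0 was clobbered or isn't a DTB

        // ...parse the memory nodes, find the UART, and carry on booting.
    }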
17:09:01 <johnjay> heat: qemu completely baffles me
17:09:10 <johnjay> i'm hoping maybe i can figure out some tricks with it from the osdev wiki
17:09:33 <johnjay> the qemu documentation is manpage-level of ridiculous
17:12:01 <klys> qemu-system-i386 -device help | less
17:25:21 <johnjay> klys: the issue is you can't always go from those to a specific CPU model
17:25:44 <johnjay> unless that's not the point and each one is just emulating core functionality from a family of CPUs like IvyBridge or SandyLake
17:26:11 <johnjay> in fact sometimes typing qemu IDs into google i just get qemu page as a source!
17:28:28 <klys> johnjay, what's a qemu id you're curious about
17:28:55 <johnjay> from the x86 menu?
17:29:04 <johnjay> it actually looks more documented than the arm and ppc ones
17:29:10 <johnjay> but like it just says "Haswell"
17:29:18 <johnjay> or let's see
17:30:22 <johnjay> i guess there's no Pentium4 option in qemu
17:30:28 <johnjay> how about this one
17:30:30 <johnjay> x86 IvyBridge Intel Xeon E3-12xx v2 (Ivy Bridge)
17:32:34 <klys> https://en.wikipedia.org/wiki/Ivy_Bridge_(microarchitecture)#Ivy_Bridge_features_and_performance
17:33:33 <johnjay> ok. i see Xeon E3-1280 is a real processor
17:33:48 <johnjay> but most of the time when i google those names i can't find a real processor
17:34:00 <johnjay> or it's not listed. like the one that just says "Haswell"
17:34:05 <zid> v2? disgusting
17:34:16 <johnjay> x86 Haswell Intel Core Processor (Haswell)
17:34:16 <zid> all haswells are haswells though
17:34:31 <klys> https://en.wikipedia.org/wiki/Haswell_(microarchitecture)#New_features
17:34:33 <zid> why would you want it more specific than haswell
17:35:08 <zid> klys: how long for your car now
17:35:33 <johnjay> that's what i'm asking
17:35:43 <johnjay> some of the qemu entries have 5 different ones like for opteron
17:35:54 <zid> opteron was a huge range of cpus
17:35:55 <johnjay> some just one word description like with Haswell
17:35:56 <klys> zid, wotzdis I don't own a vehicle
17:36:04 <zid> I meant the other kl that's why
17:36:09 <klys> ok
17:36:11 <zid> I'm very bad at pressing tab
17:36:16 <zid> https://en.wikipedia.org/wiki/Opteron
17:36:17 <johnjay> and wiki defines E7 and E5 varieties of Haswell too
17:36:31 <zid> still the same microarch
17:36:53 <zid> E7 just means quad socket capable I think
17:37:13 <zid> or at least the -4xxx are only under E7
17:37:15 <johnjay> so the logic behind the qemu IDs is just whatever seems intuitive?
17:37:26 <zid> or you know, what it is they're pretending to be...
17:37:35 <johnjay> e.g. this line of cpu had 5 distinct things in it but this one they're basically all the same?
17:38:14 <zid> My desktop is a sandy bridge-ep
17:38:39 <zid> that makes it a i7-38xx, an e5-16xx, etc
17:38:52 <zid> but they're all the same chip, just clocked differently and with ecc enabled or disabled, etc
17:39:11 <johnjay> rats i don't have qemu-system-arm on this machine to check
17:39:31 <johnjay> ok
17:39:49 <johnjay> that would make sense if some models on wikipedia are just the same chip but clocked differently
17:39:58 <zid> that's what a microarch *is*
17:40:09 <zid> the thing that behaves like it behaves
17:40:11 <zid> doesn't matter what sku it is
17:40:32 <johnjay> i just assumed qemu would have to pick particular chips to emulate
17:40:48 <johnjay> like each one would be chip model #12383482323890 or something
17:40:55 <johnjay> as opposed to "this is a generic Haswell"
17:41:08 <zid> have you seriously not bought a cpu before?
17:41:18 <johnjay> i've bought 2 or 3
17:41:29 <zid> how did you know what'd work in your machine? guessing?
17:41:30 <johnjay> usually when i have to build a new PC
17:41:41 <johnjay> i looked up the socket type of the motherboard
17:41:49 <johnjay> then googled for cpu #'s that fit that socket type. then bought it
17:42:10 <zid> so you're saying.. there's some kind of overall structure to the models? interesting
17:42:20 <zid> I thought they all just had a random part number
17:42:27 <zid> (this is what you sound like)
17:42:34 <johnjay> idk what i sound like
17:42:38 <zid> like that
17:43:02 <johnjay> if the qemu thing had more explanation as to what the included devices were and why they were included
17:43:09 <johnjay> i don't think it would be as confusing
17:43:13 <zid> yea it's really hard to figure out what a qemu config is
17:43:15 <zid> without just running it
17:43:24 <zid> I ended up checking source it was faster
17:43:36 <johnjay> right
17:50:26 <geist> johnjay: qemu is really not particularly good at emulating a particular core to the T
17:50:40 <geist> it's mostly about having the particular set of features enabled for that particular cpu
17:50:45 <geist> which is why it has a weird selection
17:51:07 <geist> but it's not a hard emulator in that regard, it's fairly sloppy with how it presents the machine to the guest
17:51:14 <geist> which is generally fine, since that's not usually its main goal
17:51:58 <geist> and what zid said, check the source
17:53:56 <johnjay> i see
17:53:59 <johnjay> makes sense
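To see concretely which feature set a given -cpu model exposes, one rough approach (a sketch, assuming an x86-64 guest with GCC/Clang's <cpuid.h> available) is to dump a couple of CPUID leaves from inside the guest and diff the output across models:

    #include <cpuid.h>
    #include <cstdio>

    int main() {
        unsigned eax, ebx, ecx, edx;
        // Leaf 1: baseline feature flags (SSE/AVX/etc. bits in ECX and EDX).
        __get_cpuid(1, &eax, &ebx, &ecx, &edx);
        std::printf("leaf 1: ecx=%08x edx=%08x\n", ecx, edx);
        // Leaf 7, subleaf 0: extended features (AVX2, BMI2, ...).
        __get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
        std::printf("leaf 7: ebx=%08x ecx=%08x\n", ebx, ecx);
        return 0;
    }

Running the same binary under, say, -cpu Haswell and -cpu IvyBridge shows the difference between the models as flipped feature bits rather than anything cycle-accurate.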
17:54:03 <adu> qemu should support soc and pch
17:54:33 <johnjay> the main goal is just to give you access to features generally associated to that class of CPU?
17:55:34 <adu> I can't speak for geist, but I'm more interested in the southbridge emulation
17:56:00 <geist> yah it's pretty floaty. there's only really two models: 440bx (pre pci-e) and q35 (pcie)
17:56:08 <geist> after that it's not really picking up new features as much
17:56:29 <geist> the primary goal of qemu on x86 seems to be being a useful KVM host machine
17:56:44 <geist> probably wasn't the original goal, but that's the role it seems to fill nowadays
17:57:05 <zid> pci-e is pretty universal
17:57:12 <zid> cpu + ram + devices is what you need for a machine
17:57:17 <zid> and all devices are pci-e on pc
17:57:25 <geist> basically
17:58:27 <adu> but if you're using qemu to unit test coreboot, pretty useless to only test the ich code, and leave the pch and soc code untested
17:58:41 <geist> but that's the reality of supporting a modern PC. there's the basic legacy PC stuff which may or may not completely be there anymore (ie, no legacy IDE, no floppy, etc) and then a bunch of relatively standard peripherals on PCI
17:59:43 <zid> linux has dropped the legacy floppy controller code because there's nobody to test it
17:59:56 <zid> everybody who cares about floppies uses them with usb floppy drives
18:00:02 <zid> not ata cables with the little twist in
18:00:14 <adu> I have an old mac with a floppy drive, but I haven't turned it on in 20 years
18:00:16 <geist> it's not dropped, but it's on the death list
18:00:28 <zid> it's dropped unless someone steps up to not have it be dropped :P
18:00:45 <geist> right, so it's really an open invite for some folks from various VM companies to support it
18:00:51 <zid> and anyone using a floppy drive *legit* probably isn't running linux 5.xx
18:00:52 <geist> since there's still some value to it potentially in VM space
18:01:15 <geist> but keeping it supported and bug free (read as exploit free for VMs) may still be useful
18:01:32 <zid> anything that both has a floppy drive, and runs linux 5.xx codebase probably doesn't exist in reality
18:01:46 <squaremelon> I have several machines with floppy drive, and i used them extensively in the past, dead technology RIP floppies, i do not miss them.
18:01:48 <zid> it'll all be running 2.4 or 2.6
18:01:49 <geist> or is some niche thing, like my old pentium 3
18:02:14 <geist> which is running current linux, has a floppy, but has no real reason to exist or be running modern linux
18:02:18 <j`ey> it wasnt deleted right?
18:02:23 <j`ey> just unmaintained
18:02:29 <geist> no, just marked as unmaintained
18:02:29 <squaremelon> pentium 3 is already a very strong machine, a lot stronger then people envision
18:02:42 <squaremelon> *than
18:02:43 <zid> a pentium 3 is really not very good
18:02:54 <zid> even clocked to 1ghz they're bad compared to modern stuff
18:02:55 <geist> yah believe me, modern software does not run very well on it
18:03:06 <zid> a 1ghz modern arm would absolutely destroy it
18:03:13 <geist> modern software tends to punish old machines even more so than just being N times slower because of clock rate
18:03:23 <geist> they also tend to completely blow the caches on the old machines
18:03:30 <squaremelon> it has queues of probably 40 rob entries as i remember, pentium 4 has more, 127
18:03:30 <geist> so they have an additional multiplier of slowness
18:03:31 <zid> old software couldn't afford 'expensive' things like aslr, etc
18:03:42 <zid> so running 'modern' style software on old machines is basically not doable
18:03:56 * geist nods
18:04:13 <geist> you really feel it for things like python or stuff that does a lot of SSL (which is everything now)
18:04:15 <graphitemaster> I've argued that most of the "raw" device handling code in Linux should be removed. Like there's basically no reason to support floppies, cdroms, audio cards as discrete driver things but rather to provide a generic emulation interface on top of USB instead for them - a few people have agreed with this sentiment actually. I feel like it would simplify a lot of things and also fix a lot of long-standing compatibility bugs and headaches
18:04:15 <graphitemaster> with things. I can count the number of times an internal cdrom has worked on Linux for me on one hand (after excessive fucking around with kernel modules and configuration) while an external one via USB has always worked out of the box
18:04:18 <squaremelon> pentium pro to pentium III should all be the same microarchitecture
18:04:20 <geist> that stuff just drags to a crawl on old machines
18:04:24 <zid> modern machines feel about the same speed as they felt *then*, and it isn't bloat, it's literally the fact everything is now encrypted and hardened and process isolated etc etc
18:04:34 <zid> it doesn't make a pentium 3 fas tthough
18:04:56 <geist> it's also bloat, and modern interpreted languages
18:04:59 <zid> internal cdrom always worked for me
18:05:11 <geist> things like package managers being written in python, probably running a bunch of O(N^2) algorithms, etc
18:05:28 <geist> may take 5 seconds on a modern machine, takes like 3 minutes on an older one, etc
18:05:34 <zid> that makes it a lot worse yea
18:05:38 <squaremelon> 40 rob entries is more than enough to make async queued code in the kernel
18:05:43 <zid> but even ignoring that a p3 is still slow :P
18:05:47 * geist looks at apt-get with a stink eye
18:05:55 * zid is upset portage is python
18:06:26 <graphitemaster> Resolving dependencies can be done in O(V+E) time. I've implemented it before. The problem is no one is smart enough like me - /r/iamverysmart
18:06:42 <geist> u r teh smeart
18:06:50 <zid> resolving deps is O(1)
18:06:56 <geist> y r u so smaert
18:06:59 <zid> if by resolving deps you mean 'the package has a list of deps' :p
18:07:22 <geist> oh who knows, but stuff like 'apt-get some thing' <churn for 45 seconds>
18:07:30 <zid> and anything else is dumbassery
18:07:30 <geist> or apt-get update <thinks for 3 minutes> etc
18:07:39 <zid> which I imagine there's a lot of going around
18:07:52 <geist> sure, no point optimizing beyond the point of it working well enough
18:08:02 <zid> idk why people assume "Do you have package x version >z" should take minutes to check :(
18:08:29 <geist> lots of stuff is also likely stored in compressed form too
18:08:38 <geist> so it may or may not need to go ungzip 100 files, etc
18:08:47 <zid> portage certainly doesn't
18:08:48 <geist> you feel it with man pages on old machines
18:08:52 <zid> it's a filesystem filesystem
18:08:54 <squaremelon> microarchitecture of 486 can be used too it has 32B prefetch queue, occomadting 10macro-ops or so
18:08:58 <zid> you just stat the package name
18:09:15 <graphitemaster> I mean sorting a list of dependencies in the order, e.g topological sorting, you can't just install a list of deps in any order, usually you install them in the order they're needed
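The topological-sort idea above, sketched with Kahn's algorithm; with hash maps the work stays roughly O(V+E). The Graph alias, install_order, and the idea of mapping each package to the packages it depends on are invented for illustration, not any particular package manager's code:

    #include <queue>
    #include <string>
    #include <unordered_map>
    #include <vector>

    using Graph = std::unordered_map<std::string, std::vector<std::string>>;

    // deps maps each package to the packages it depends on.
    std::vector<std::string> install_order(const Graph& deps) {
        std::unordered_map<std::string, int> unmet;   // unmet-dependency counts
        Graph dependents;                             // dep -> packages needing it
        for (const auto& [pkg, ds] : deps) {
            unmet.try_emplace(pkg, 0);
            for (const auto& d : ds) {
                unmet.try_emplace(d, 0);
                ++unmet[pkg];
                dependents[d].push_back(pkg);
            }
        }
        std::queue<std::string> ready;                // installable right now
        for (const auto& [pkg, n] : unmet)
            if (n == 0) ready.push(pkg);
        std::vector<std::string> order;
        while (!ready.empty()) {
            std::string pkg = ready.front();
            ready.pop();
            order.push_back(pkg);
            for (const auto& who : dependents[pkg])   // dependents may unblock
                if (--unmet[who] == 0) ready.push(who);
        }
        return order;  // shorter than unmet.size() means there was a cycle
    }

Installing in the returned order means every package's dependencies are already present when it is unpacked.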
18:09:20 <geist> also a really bad one is those silly helper tab completion routines on ubuntu or whatnot
18:09:29 <geist> you type tab, 50 python scripts run, etc
18:09:41 <geist> or worse, you type a command that's not there, so it goes and tries to helpfully suggest something from portage
18:09:49 <zid> gross
18:09:53 <squaremelon> but 486 and pentium I are in-order cores with pipelining
18:09:57 * zid emerge -C oh-my-zsh
18:09:59 <geist> it's slick if you have a fast machine
18:10:06 <graphitemaster> You know about 3 million lines of JS get run in VSCode when you press `tab` for auto complete
18:10:10 <geist> nah, it's not a bash feature as it's a debian/ubuntu feature
18:10:16 <geist> they install a whole pile of helper scripts
18:10:21 <graphitemaster> How the fuck does it run 3 million lines of JS so quickly
18:10:24 <graphitemaster> It amazes me
18:10:26 <zid> geist: oh-my-zsh does it for zsh on gentoo afaik
18:10:28 <geist> computers are fast
18:10:41 <squaremelon> atom is more powerful in-order core that has post-decode queues for fast path
18:10:46 <zid> mod -U compinit or some crap in zshrc
18:11:32 <zid> 'autoload -U compinit promptinit' ther we go :P
18:11:35 <zid> I can tab complete git commands etc
18:11:47 <geist> oh dont get started on trying to use git on a slow machine
18:12:06 <geist> git as a data store is intrinsically based on the idea that you have a substantial amount of processing power at your disposal
18:12:15 <geist> it is a tool that could only be born in the modern age
18:12:28 <zid> now imagine it's written in bash :D
18:12:29 <graphitemaster> I have git aliases because I'm lazy as fuck, gc => git commit, gp => git push, gf => git fetch + git pull
18:12:45 <geist> gru => git remote update -p
18:12:55 <geist> gpr => git pull --rebase
18:14:56 <squaremelon> pipelined superscalar is a more expensive arch with a scoreboard in hw, losing some queue entries and alus due to the expense of doing nuts stuff in hw; VLIW is by far the sanest architecture
18:15:03 <graphitemaster> gfu -> git push --force
18:15:13 <squaremelon> but they were not able to understand how to run Itanium for instance
18:15:18 <zid> git help^C^C:q!
18:15:23 <geist> grh => git reset --hard
18:15:42 <graphitemaster> grhh => git reset --hard HEAD
18:16:34 * geist is drinking a Molson
18:16:41 <geist> and it's... what i expected
18:16:49 <graphitemaster> geist, you sir are a gentleman and a scholar
18:16:59 * geist raises pinky finger
18:17:35 <graphitemaster> I don't really like it personally but it is a popular beer here
18:17:51 <geist> it's domestic beer
18:17:56 <graphitemaster> It's a really cheap beer though so I get why it's popular
18:18:00 <zid> never heard of it
18:18:02 <geist> right
18:18:11 <graphitemaster> Labatt is also very common
18:18:57 <graphitemaster> We have a craft microbrewer here called "Full Beard Brewing" and they make some really good stuff. A little more expensive because locally brewed and lots of variety so small batches and what not
18:19:08 <graphitemaster> However it certainly has a far better mouthfeel IMHO
18:19:14 <geist> yah, local craft brewing FTW
18:19:32 <graphitemaster> https://fullbeardbrewing.com/ Check out some of their stuff it's pretty cool
18:20:18 <graphitemaster> Aussie, Aussie, Aussie, Eh, Eh, Eh!! is really good
18:22:17 <squaremelon> best beers are in estonia -- saku on ice is what i drank today, a le coq premium is the foreigners favorite and mine 4.7vol 5.2 on ships is not that good
18:22:23 <squaremelon> but urquell is also good
18:22:45 <squaremelon> that isn't estonian
18:22:49 <johnjay> so modern code would be way faster if people were smarter and used less bloat?
18:22:58 <geist> johnjay: not necessarily
18:23:11 <geist> bloat often equals features that people want
18:23:25 <geist> they're just impractical features to have implemented 10-20 years ago
18:23:42 <geist> sometimes it's useless crap, sometimes it's stuff people like, not all bloat is bad
18:24:53 <graphitemaster> CPUs tend to go in the direction of what patterns are used a lot in code. That's something to consider. Like x86's lea instruction is literally designed to make accessing fields of a struct sane. The removal of a lot of parallel bitwise hardware is because code doesn't really use bitwise stuff as much, etc.
18:25:21 <graphitemaster> So it's not just people being smarter and using less bloat it's also the hardware staying aligned with what the software trends are.
18:25:44 <j`ey> does qemu-aarch64 support booting directly into a kernel too?
18:25:46 <geist> though lea is not a x86 unique instruction. most cisc machines i've looked at with complex addressing modes have some sort of equivalent
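A hedged illustration of the lea point, with invented names: the address of a struct field inside an array is exactly the base + index*scale + displacement shape that x86's addressing modes (and lea) encode in a single instruction.

    #include <cstddef>
    #include <cstdint>

    struct Packet { uint32_t len; uint32_t flags; };

    uint32_t* flags_of(Packet* table, std::size_t i) {
        // On x86-64 this typically compiles to something like:
        //   lea rax, [rdi + rsi*8 + 4]
        return &table[i].flags;
    }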
18:25:54 <j`ey> (too, as in, I think qemu-x86 does)
18:25:56 <graphitemaster> Sometimes software dictates hardware, sometimes hardware dictates software.
18:25:58 <geist> j`ey: yeah
18:26:02 <johnjay> geist: what features could we have instead if we didn't have hit tab and run a million python and java scripts
18:26:11 <j`ey> geist: and it passes a device tree?
18:26:16 <geist> j`ey: yep
18:26:21 <j`ey> neat
18:26:30 <squaremelon> memory those days use buffered loads on tomasulo sw based derivatives, this is how most out-of-order cores run tomasulo
18:26:33 <geist> remember -kernel on qemu is basically explicitly designed to boot linux
18:26:39 <geist> so it tends to be linux centric, whatever it wants
18:26:40 <squaremelon> on intel those are done with hw transactional memory
18:26:53 <j`ey> software dictates hardware..like aarch64's FJCVTZS
18:26:58 <j`ey> "Floating-point Javascript Convert to Signed fixed-point, rounding toward Zero.
18:27:10 <j`ey> geist: I see
18:27:19 <squaremelon> you have to make a recursive procedure to pile up loads so that the rob head is not removed
18:27:20 <johnjay> ... the instruction name has Javascript in the name?
18:27:28 <graphitemaster> What was that ARM specific extension meant for running Java bytecode called. Made for Android
18:27:37 <graphitemaster> That's another example of software dictating hardware.
18:27:54 <johnjay> that looks like insanity to me
18:28:01 <graphitemaster> Jazelle
18:28:10 <johnjay> but i guess if it's what companies demand...
18:28:26 <squaremelon> on ARM axi and amba interconnect and load store queues manage the buffered loads
18:28:31 <graphitemaster> Anyways ThumbEE is kind of similar
18:29:00 <graphitemaster> Honestly. At this point ARM is a CISC
18:29:17 <squaremelon> because on out-of-order core only a rob head can commit to architectural state
18:29:27 <graphitemaster> You can't just keep saying it's a RISC because you have a lot of registers and make everything an extension that is optional.
18:29:27 <squaremelon> other instructions are issued but not committed
18:29:47 <squaremelon> basically that means they commit to microarchitectural state
18:30:25 <squaremelon> head of the rob can access both though
18:30:27 <johnjay> honestly it would be awesome to write a better apt-get
18:30:33 <squaremelon> micro and usual arch state
18:30:36 <johnjay> it always takes a long time for me to update things even with an SSD drive
18:31:10 <graphitemaster> 32-bit ARM is very CISC like. It has variable length instructions, instructions that read and write _multiple_ registers, a variety of odd ones that have few uses (Neon), they all break up into a variable number of operations
18:31:14 <graphitemaster> That's literally CISC
18:31:27 <graphitemaster> I think aarch64 cleaned up a lot of it but left a lot of it also CISC like
18:31:54 <graphitemaster> load/store pair, load/store /w auto increment, arithmetic/logic with shifts, vector ld/st instructions in Neon for strided reads/writes
18:32:02 <j`ey> I wouldnt say its variable sized..
18:32:18 <squaremelon> graphitemaster: yeah so they say, but its cisc path is not compatible with very old arm chips
18:32:34 <johnjay> does the risc vs cisc distinction still matter?
18:33:04 <johnjay> it sounds like everything is cisc now
18:33:06 <klys> instruction set computing
18:33:11 <squaremelon> johnjay: on long pipeline it matters definitely, cause hw can execute faster optimized code when you give less data
18:33:42 <squaremelon> but it comes with a cost
18:33:53 <squaremelon> mostly in power usage also added
18:34:05 <klys> complete instruction set computing
18:34:06 <johnjay> is that why arm has been lower power than x86?
18:34:49 <graphitemaster> Every CPU is a micro-architecture. At the end of the day the only real difference here is cache coherency and the decoder
18:34:52 <squaremelon> mainly yeah i think so at least
18:35:24 <graphitemaster> Everything breaks up into a ton of micro ops after the decoder stage regardless if it's an ARM, x86, PPC, etc.
18:35:41 <graphitemaster> Like really we're just comparing frontends as far as I'm concerned.
18:37:29 <squaremelon> johnjay: but there is a distinction, whether it is an in-order or out-of-order core; the latter consumes more power
18:37:41 <squaremelon> and finally the VLIW which consumes the least
18:42:17 <squaremelon> and yeah obviously caching matters, fully associative caches and CAMs are also fairly power-hungry structures in hw
18:43:31 <johnjay> i was just thinking a drawback of modifying the hardware to be more compatible with the software
18:43:39 <johnjay> is it restricts the range of software you can write
18:43:44 <johnjay> almost like converging to a point
18:44:02 <graphitemaster> Well we're kind of seeing that now.
18:44:15 <graphitemaster> Software has historically been written to be single threaded for too long
18:44:34 <graphitemaster> The hardware has kind of been incapable of improving that so they've gone thread heavy
18:44:38 <graphitemaster> THREAD RIPPER
18:44:40 <squaremelon> Yes, for instance genesis 2 kits CGRAs are very low-power and one of my favorites
18:44:51 <johnjay> graphitemaster: wasn't that decades ago though
18:44:51 <graphitemaster> geist, THREAD RIPPER
18:44:59 <squaremelon> i want to for instance write a genesis2 generator for TCE TTA
18:45:16 <johnjay> shouldn't we have a clear idea by now if multithreading is worth it?
18:45:17 <graphitemaster> Yeah but software takes ages to catch up and we still have no decent threading model that is easy to program with
18:45:55 <graphitemaster> The fact of the matter is that mutability is just a nightmare when you have threads. That's just the honest to gosh fact.
18:46:19 <squaremelon> software developers are a bit insecurely dumb at times; there is an issue that everyone wants to be an sw developer, even those who can not think properly
18:46:28 <squaremelon> and there is a lot to coordinate
18:46:38 <graphitemaster> I think in order for us to really have a clear understanding of if threads are really worth it aside from very specific scenarios that benefit because they map well to threads we need faster memory
18:47:00 <graphitemaster> I don't just mean faster as in a little I mean we need to be able to copy around data structures without worrying about memory bandwidth
18:47:22 <graphitemaster> That's the only way we can truly explore immutable styles of programming
18:47:50 <johnjay> "immutable styles" = threading?
18:47:55 <squaremelon> in the end threads map to different cores, manycore would have to be written correctly to spawn them well
18:47:55 <graphitemaster> immutable data
18:48:15 <johnjay> by the way i'm thinking about diving into arm assembly. it's either that x86 or ppc and arm seems to be the most popular now
18:48:28 <squaremelon> yeah it is absolutely possible, but i have no new AMD cores available at home
18:48:31 <graphitemaster> Basically if everything was copy on write, data race issues go away and the only problem left to solve is ABA
18:48:41 <graphitemaster> Which can be done with monotonically increasing counters
18:48:50 <graphitemaster> It's not even difficult to teach either.
18:48:53 <squaremelon> i have not researched how well THREAD RIPPER was done, i only have heard it is pretty good
18:48:58 <graphitemaster> Compared to the shit show that is locks and atomics anyways.
18:49:27 <graphitemaster> Thing is, COW is slow. It's slow because munging that much data is slow
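A sketch of the monotonic-counter fix for ABA mentioned above: tag the pointer with a version that only ever grows, and compare-exchange the pair as one unit. Node, Tagged and push are invented names, and whether the 16-byte compare-exchange is actually lock-free depends on the platform (std::atomic may fall back to a lock):

    #include <atomic>
    #include <cstdint>

    struct Node { Node* next; int value; };

    struct Tagged {
        Node*         ptr;
        std::uint64_t ver;   // bumped on every successful update, never reused
    };

    std::atomic<Tagged> head{Tagged{nullptr, 0}};

    void push(Node* n) {
        Tagged old = head.load();
        Tagged wanted;
        do {
            n->next = old.ptr;
            wanted = Tagged{n, old.ver + 1};   // even if ptr repeats, ver won't
        } while (!head.compare_exchange_weak(old, wanted));
    }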
18:49:34 <geist> THREAD RIPPER
18:51:27 <graphitemaster> AMD's multi-die solution to CPUs and GPUs with a fast interconnect is going to allow them to experiment with putting actual memory on die
18:51:38 <graphitemaster> I don't mean like CPU caches here
18:51:44 <geist> MEMRIPPER
18:51:45 <graphitemaster> I mean like full on RAM, think HBM
18:51:57 <graphitemaster> They already filed patents for it
18:52:41 <graphitemaster> http://www.freepatentsonline.com/20190196742.pdf
18:52:55 <geist> NO PATENTS OMERGLEGLEGEFFEG
18:52:56 <graphitemaster> https://images-ext-1.discordapp.net/external/f8x9pbYKwQc0elRe9RuMA8-eiBZ7JyKuar_NQGsSZUM/https/pbs.twimg.com/media/EAiVC-DWkAANOgG.png%3Alarge
18:53:26 <squaremelon> johnjay: there are baiscally two ways to run faster code, on single core with pipelining and enough ALUs and as parallel threads on separate cores
18:54:03 <graphitemaster> geist, too late, you've been tainted
18:54:15 <geist> RIP AND TEAR PATENTS
18:54:17 <johnjay> ah this article is beyond my current level
18:54:19 <johnjay> https://en.wikipedia.org/wiki/Readers%E2%80%93writers_problem
18:54:50 <johnjay> honestly the only resource i can think of for getting into concurrency is the pthread book
18:54:56 <johnjay> idk if there's a better one
18:55:02 <squaremelon> johnjay: that is for long pipeline mode, but for short mode a tomasulo is handy, this should be async start of the pipeline skipping also most the decoding
18:55:02 <geist> probably a place to start
18:55:19 <graphitemaster> Honestly. I've reworked my understanding of threading from the framework of a future and a promise
18:55:25 <graphitemaster> I think it's a better mental model than locks
18:55:34 <graphitemaster> (and condition variables)
18:55:47 <johnjay> you know, while we still have to deal with the atomic lock "shitshow"
18:55:48 <squaremelon> in the end how all works, is how you write software for particular hardware
18:56:51 <squaremelon> hardware provides those instructions, however there is also lockless paradigm
18:57:02 <squaremelon> giving every core it's own memory
18:57:38 <geist> speaking of shitshow you shoulda seen the porta potty i used earlier today
18:57:40 <geist> eeeeeyooooo
18:57:45 <graphitemaster> You have a task, the task produces a result, that result is your data. A future is basically a boxed representation. It's something that says "in the future I will contain something", you explicitly unbox it to have it wait for something. On the other end you have a promise (derived from a future) the point of the promise is to "send" the result to the future, it implies that "I promise to send this result in the future"
18:58:15 <graphitemaster> I think that mental model is really powerful
18:58:27 <geist> graphitemaster: doesn't that start to get unwieldy for situations where you need ongoing worker threads and whatnot?
18:58:39 <geist> i guess not, if you consider the work to be a series of promises
18:58:52 <geist> but that still implies that one of the threads is sort of the master
18:59:08 <geist> or it's all a hierarchy of promises
18:59:21 <johnjay> graphitemaster: if you can recommend a good place to start with that i'm all ears
18:59:29 <geist> 'i promise to be a program' which then splits things down into smaller and smaller promises
18:59:46 <graphitemaster> geist, not really, you can produce results as a queue<future<T>> right, so as your worker threads are churning away they're filling the queue with results, some consumer of the queue either blocks on one or doesn't based on what is done, we also have multi-wait that waits on many at once and wakes up when just one of them gets the result (think poll)
19:00:08 * geist nods
19:00:24 <graphitemaster> Thing is there is a lot of overhead here compared to fine grained locking and what not
19:00:37 <graphitemaster> Each future is basically a tuple of (mutex + condition variable + data)
19:00:59 <geist> i've certainly heard folks go on about that model at work
19:00:59 <graphitemaster> oh and a boolean I guess indicating if it's been set
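The same tuple, using the standard library instead of rolling it by hand - a minimal sketch with std::promise/std::future (the 42 is arbitrary):

    #include <future>
    #include <iostream>
    #include <thread>

    int main() {
        std::promise<int> p;                  // "I promise to send a result"
        std::future<int> f = p.get_future();  // "in the future I will contain it"

        std::thread worker([pr = std::move(p)]() mutable {
            pr.set_value(42);                 // keep the promise
        });

        std::cout << f.get() << "\n";         // blocks until the value arrives
        worker.join();
    }

set_exception() on the promise side makes f.get() rethrow in the consumer, which is how errors cross the thread boundary in this model.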
19:01:10 <squaremelon> well well, sortie explained this part well back in the day, what did CAS stand for?
19:01:14 <graphitemaster> I really like it to be honest. It's really difficult to fuck things up.
19:01:15 <squaremelon> maybe copy and set right?
19:01:23 <geist> compare and set probably
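For the question above: CAS is usually read as compare-and-swap (compare-and-set works too). A sketch of its semantics via std::atomic, with an invented try_claim helper:

    #include <atomic>

    // Atomically: if (*slot == expected) { *slot = desired; return true; }
    //             else                   { expected = *slot; return false; }
    bool try_claim(std::atomic<int>& slot, int expected, int desired) {
        return slot.compare_exchange_strong(expected, desired);
    }

Lock-free structures are built as read/compute/CAS retry loops on top of exactly this primitive.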
19:01:25 <sortie> Hello I am sortie.
19:01:32 <geist> a wild sortie appears
19:01:41 <graphitemaster> That being said, it's not something you want in the middle of a critical loop though. Mutexes are still slow in realtime situations.
19:01:47 <sortie> Today I set up a systemd unit file for my IRC bots. So wild!
19:01:48 <squaremelon> there are two concepts, reentrancy and thread-safety
19:01:50 <graphitemaster> Lock free data structures still need to exist sadly.
19:02:09 <geist> as well as porta-potties
19:02:27 <squaremelon> the difference relies on one being interrupt safe and other not
19:02:31 <zid> The two universal trues, lock free data structures and porta-potties need to exist
19:02:45 <geist> Everybody Poops
19:02:52 <graphitemaster> You don't want a lock free porta-potty though
19:02:55 <squaremelon> but both concepts rely on data consistency and possible dead lock if locks are done incorrectly
19:02:55 <graphitemaster> Privacy is paramount.
19:03:02 <geist> graphitemaster: niice
19:03:28 <graphitemaster> Yall set me up for it
19:03:39 <geist> nice return
19:05:27 <graphitemaster> You could certainly paint a vivid picture explaining data races and lock-free as a bunch of people standing in line to use a porta-potty with no locks on them.
19:05:39 <graphitemaster> Everyone desperately needing to go number two.
19:05:58 <graphitemaster> It becomes a literal shit show
19:06:22 <graphitemaster> That's actually the true epitome of multithreaded programming
19:06:41 <zid> jiggling the handle is a spinlock?
19:07:54 <graphitemaster> No, smelling the unit while you're first in line to tell if the shit is fresh or not would be a spin lock.
19:08:28 <geist> there's also a barging spinlock
19:08:53 <geist> barging trylock is the drunk guy that just walks up and tries to pry the door open, ignoring the line
19:09:56 <johnjay> at this point i have to confess that when I can't find a portapotty at the beach i just pee in the ocean
19:10:01 <johnjay> hopefully that's not too weird of a thing
19:10:12 <geist> that's a data leak
19:10:24 <johnjay> hah
19:10:39 <graphitemaster> tainted sink
19:10:50 <squaremelon> so one spanient added a scheduling problem slides on both single and multi-gpu systems too, if there is someone familiar with matrices for instance, then he made it via cholesky factorization function by blocks version of that, it is hard to explain this to someone who can not understand the subject well though
19:10:53 <geist> on a weakly ordered beach you can't predict when anyone will feel the warm pee
19:11:13 <squaremelon> but his slides were fully understandable to me, even though i use slightly different mechanism there
19:11:49 <squaremelon> err algorithms by blocks heck
19:17:12 <graphitemaster> So multithreaded bootloaders when?
19:17:22 <graphitemaster> I need to be running EL TORITO on all my THREAD RIPPER threads
19:21:42 <squaremelon> basically the queues on gpu can not be run with branches at all, since branches behave differently on gpu based queues, they are manipulated as wavefront id rows and columns
19:23:24 <squaremelon> there are two identifiers for each row and column one is a fetch wavefront or also called decode wavefront ID
19:23:40 <squaremelon> and another one is called SIMD arbiters wfid
19:24:17 <squaremelon> when those are equal a column change is done in the queues
19:24:42 <squaremelon> those can go equal only if two instructions are issued in real program order
19:24:48 <squaremelon> wherever on the wfid line
19:27:25 <squaremelon> the line or so called row on GCN is 40wfid entries, and the column also 40 entries
19:27:51 <squaremelon> so if you schedule back to back 5and 6 entry in the line or row
19:28:07 <squaremelon> it will change to 6th column and continue to put instructions to there
19:28:48 <squaremelon> a lot of manipulation needs to be done so that all is deterministic, it is very much more complex than on cpus but can be reasonably easily still done
19:29:18 <squaremelon> both of the arbiters wrap around also
19:31:58 <squaremelon> so since scoreboard takes a lot of resources in hw, actually VLIW queues are wider 5way vliw has 2560 queue entries so it is more powerful than GCN
19:32:31 <squaremelon> gcn has 1600 per compute unit, recent intel CPU core queues are around 400 entries
19:36:13 <squaremelon> for instance NAVI is entirely insane arch. VLIW now again, but single CU out of 20 has 10240 queue entries post-decode like this
19:41:27 <john_cephalopoda> Hey
19:43:53 <john_cephalopoda> I am running a multiboot image in qemu. My stacks are located at 0xea60 and go on for 3000 bytes. When I move my stacks to 0x7E00 instead, the OS still runs, but way slower. Any ideas what might cause this?
19:44:41 <zid> because the pages overlap with your code?
19:45:07 <zid> and so it's constantly trapping writes to make sure you didn't polymorphise your code, maybe.
19:47:25 <squaremelon> some people wrote very good netlist IR based tools, which make the verilog more readable or can be flattened also to simulator based C++, it was somewhat nervy to test all, since the chips full output is pretty huge
19:47:45 <squaremelon> i came across some few millions of lines that i processed
19:48:03 <john_cephalopoda> Indeed. When I move my OS image up to 6M, there is no issue with the speed any more.
19:49:17 <squaremelon> it is not always easy to be a golden man, violations are always following and the work is also hard at times , which most others can not manage to do
19:54:15 <squaremelon> I did most the heavy scoring earlier in sports, and the more i did so the more i became violated, myself never even having done much wrong, you do not have to do anything wrong for nutters to prescribe you estonian histories poisonessed and deadliest additive combination in forced stay, like hitler attacked lots of europeans without any of them having done much bad
19:54:27 <squaremelon> he was insane, like most estonians who have violated me
19:56:01 * j`ey is very confused
20:01:13 <squaremelon> what i am saying is i have 300 complaints alone to the mental institution when i open my e-files to start the case against them, not to mention those nutters who picket on a daily basis; it is one of the biggest and cruelest scams i have ever met, and unfortunately the victim is me
20:13:32 <squaremelon> what i am saying i am sick of those deadly scammers, it must be one of the most hopelessly nastiest terror conspiring case, everything is faked and framed by even authorities, i do not want to see a single person anymore and want to purchase a place where i can live alone, bye
20:14:00 <j`ey> uh
20:17:07 <john_cephalopoda> zid: Thanks again for the tip with the paging. If I understood it correctly, putting my kernel into a different 4k of memory will solve that issue?
20:19:08 <john_cephalopoda> Or should it be exactly 4k away to ensure that it is in an entirely different page in any case?
20:20:33 <zid> john_cephalopoda: probably
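One way to sidestep the question entirely, sketched with made-up names (STACK_SIZE, boot_stack): reserve the stack inside the kernel image, page-aligned, so it never shares a 4 KiB page with code or with low memory the loader still uses.

    #include <cstddef>
    #include <cstdint>

    constexpr std::size_t STACK_SIZE = 16 * 1024;

    // Lands in .bss; alignas(4096) keeps it on its own page boundary.
    alignas(4096) static std::uint8_t boot_stack[STACK_SIZE];

    // The early asm stub then does something like
    //   mov $(boot_stack + STACK_SIZE), %esp
    // before calling into C/C++ code.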