Search logs:

channel logs for 2004 - 2010 are archived at ·· can't be searched

#osdev2 = #osdev @ Libera from 23may2021 to present

#osdev @ OPN/FreeNode from 3apr2001 to 23may2021

all other channels are on OPN/FreeNode from 2004 to present

Friday, 7 January 2022

04:55:00 <Jari--> morning * /w
05:09:00 <Jari--> sdfgsdfg: morning from Finland, EU
05:09:00 <sdfgsdfg> afternoon from australia
05:09:00 <Jari--> sdfgsdfg: is Australia's timezone same as Japan's ?
05:09:00 <Jari--> approximating it is
05:09:00 <sdfgsdfg> probably
05:10:00 <sdfgsdfg> gmt+8
05:10:00 <Jari--> got a girlfriend in Japan
05:10:00 <sdfgsdfg> whats she doing there, tell her to come to finland
05:10:00 <Affliction> East coast is +10/11, west coast is +8
05:10:00 <Jari--> sdfgsdfg: remote love
05:10:00 <Affliction> the central states have weird :30 timezones
05:10:00 <sdfgsdfg> lol my bad, yes east coast is +10, melbourne here
05:11:00 <sdfgsdfg> I think +11 is queensland
05:11:00 <Affliction> other way around, southern states have DST, we don't
05:11:00 <Mutabah> +11 for daylight savings
05:11:00 <sdfgsdfg> they dont like the timezone changes
05:11:00 <Affliction> (I'm in Brisbane)
05:11:00 <sdfgsdfg> savings thing
05:11:00 <sdfgsdfg> after 9 yrs in qld I could call myself a qlder too :P
05:12:00 <Affliction> born here, probably die here heh
05:12:00 <Affliction> Haven't been much further south than Canberra
05:12:00 <sdfgsdfg> why don't you try GC or SC
05:13:00 <sdfgsdfg> or byron over the summer, can you travel interstate?
05:13:00 <Affliction> Still requires testing ahead of time I think
08:28:00 <sham1> DST is pain
09:21:00 <klange> I like not having it, but not having it does not mean I am free from its pain as I still have to deal with places that have it.
09:23:00 <clever> i have daily meetings at noon, except when DST decides to randomly move it to either 11am or 1pm, lol
09:23:00 <clever> both me and the meeting are moving
09:23:00 <clever> and of course, we arent synchronized!
09:28:00 <sham1> I shouldn't suffer from 1 hour jetlag semiannually
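clever's complaint above, about a noon meeting sliding to 11am or 1pm, is easy to reproduce: a meeting pinned to local noon in a DST-observing zone drifts by an hour for anyone in a zone without DST. A small sketch using Python's zoneinfo (the two zones are chosen purely for illustration; America/Regina observes no DST):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A meeting pinned to noon in a DST-observing zone (zones are illustrative).
meeting = ZoneInfo("America/New_York")
home = ZoneInfo("America/Regina")  # Saskatchewan: no DST year-round

for day in ("2022-01-07", "2022-07-07"):
    noon = datetime.fromisoformat(f"{day}T12:00").replace(tzinfo=meeting)
    # The same "noon meeting" lands an hour apart across the DST switch.
    print(noon.astimezone(home).strftime("%H:%M"))
```

Neither side did anything; only the UTC offset of the meeting's zone changed under them.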
19:10:00 <pie_> Hi, is there something like osdev but for custom CPU architectures?
19:11:00 <GeDaMo> There's #cpudev but only 5 nicks
19:11:00 <GeDaMo> ##fpga maybe
19:11:00 <pie_> well, that's a start, thanks!
19:11:00 <pie_> if anyone thinks of anything else, please tell
19:12:00 <GeDaMo> ##homebrewcpu
20:41:00 <sortie> pie_, certainly if there isn't, someone should build that community :)
20:45:00 <geist> yah
20:46:00 <geist> TIL that Coherent OS was not based on existing unix or bsd sources
20:46:00 <geist> i always thought it was a variant, but apparently it was a from scratch implementation of unix at the time (mid 80s)
20:53:00 <moon-child> pie_:
20:53:00 <bslsk05> Home :: OpenCores
20:54:00 <GeDaMo>
20:54:00 <bslsk05> Homebrew Computers Web-Ring
20:55:00 <GeDaMo> pie_: also, have you seen Ben Eater's breadboard CPU videos?
20:56:00 <GeDaMo> Ben Eater - Building an 8-bit breadboard computer!
20:56:00 <bslsk05> playlist 'Building an 8-bit breadboard computer!' by Ben Eater
20:56:00 <GeDaMo>
20:56:00 <bslsk05> playlist 'Making an 8 Bit pipelined CPU' by James Sharman
21:03:00 <geist> also looks like is running today
21:03:00 <geist> he's had that thing alive for i think 15 years now
21:33:00 <gorgonical> I agree with Bill here: somehow FPGAs dont feel like the same thing
21:33:00 <gorgonical> Like it's cheating or something
21:37:00 <gog> how to program fpga to be perfect girlfriend
21:37:00 <gog> not cheating
21:37:00 <gog> :|
21:38:00 <kazinsal> field-programmable waifu
21:38:00 * gog pets kazinsal
21:38:00 * kazinsal nyaas unexpectedly
21:39:00 <kazinsal> uh oh. I've become a catboy
21:39:00 <gog> :3
21:40:00 <gorgonical> so how does an FPGA actually work? like, are you just ultimately providing the truth-table values for each gate?
21:40:00 <zid> they don't want you to know
21:41:00 <gorgonical> Is that what's happening? You program the lookup tables and the block references that table?
21:41:00 <zid> but they're basically a giant shift register and you clock it all in, and yea, the bits determine if it's an and/or/xor etc
21:41:00 <zid> in a big grid
21:41:00 <gorgonical> So surely there's trade-offs between having per-block tables vs a big central table or something
21:42:00 <kazinsal> basically each logic block has a 4-input LUT, an adder, and a flip-flop
21:43:00 <gorgonical> And the purpose of the block may not use all of these components
21:43:00 <gorgonical> But the general idea being any gate configuration you want you can achieve with these pieces
21:43:00 <kazinsal> yeah FPGA optimization is magic
21:44:00 <gog> i'd really like to play with one
21:44:00 <gog> i should look into some mini dev boards or something
21:44:00 <gorgonical> So probably there's a lot more happening under the hood. Cause the blocks have to be routed, etc. So assumedly there's almost no FPGA where you can take gate logic and just apply it?
21:44:00 <gorgonical> As in a "really see what's happening" approach?
21:45:00 <gorgonical> My understanding is that you take something like verilog and push it through various tools that transform it into the FPGA magic configuration you need, which will not resemble in any way the verilog you put in
21:46:00 <gog> i have no idea how any of it works lol
21:46:00 <gog> i just know i want one as a toy
21:48:00 <zid> I just want a microcontroller and an spi thingy
21:55:00 <bauen1> zid: a digispark maybe ? it's an attiny85 with a very hackish usb port and enough free wires for spi :D
22:04:00 <zid> by microcontroller I basically mean 'controller', not that micro :P
22:56:00 <pie_> i patiently await your lecture on monday :3 <gorgonical> so how does an FPGA actually work? like, are you just ultimately providing the truth-table values for each gate?
22:56:00 <geist> gorgonical: basically it's a series of LUTs yes
22:57:00 <geist> may be a huge array of say 5 in 2 out LUTs with a lot of interconnecting traces
22:57:00 <pie_> i think that might be CPLDs but I'm not sure <gorgonical> So probably there's a lot more happening under the hood. Cause the blocks have to be routed, etc. So assumedly there's almost no FPGA where you can take gate logic and just apply it?
22:57:00 <geist> also each LUT may have some additional features like a 1 or 2 bit latch, or a dedicated add circuit, or an inverter on every input/output
22:58:00 <geist> plus some dedicated SRAM blocks spread around the FPGA, some PLLs and some pin drivers
22:58:00 <geist> but the bulk of the lifting are the LUTs
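gorgonical's question and geist's answer above can be sketched concretely: a LUT really is just a small truth-table memory, and "programming" it means precomputing the output bit for every input combination. A toy Python model of the 4-input case (function names are made up for illustration, not any vendor API):

```python
def make_lut4(func):
    """Model programming a 4-input LUT: precompute all 16 truth-table bits."""
    table = [func((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1) & 1
             for i in range(16)]

    def lut(a, b, c, d):
        # The "hardware" just indexes the config bits with the input signals.
        return table[(a << 3) | (b << 2) | (c << 1) | d]

    return lut

# Any network of gates with up to 4 inputs collapses into one LUT:
xor_and = make_lut4(lambda a, b, c, d: (a ^ b) & (c | d))
```

This is why the synthesized result "will not resemble in any way the verilog you put in": the tool flattens whole gate networks into these tables before routing.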
22:59:00 <geist> what's fascinating is to look at what the fpga compiler comes up with
22:59:00 <geist> you can usually get it to visualize how it decided to flatten your logic
22:59:00 <pie_> various things mentioned here and in the boxes at the bottom
22:59:00 <bslsk05> ​ Complex programmable logic device - Wikipedia
23:00:00 <geist> yah CPLDs and FPGAS are pretty similar nowadays
23:00:00 <geist> they used to have more of a difference, but now it's kinda like cpu vs microcontroller. similar things, largely scale and how they're used
23:01:00 <geist> my experience is modern CPLDs are usually smaller, lower power, and have built in flash so you can program them and they stay that way
23:01:00 <geist> FPGAs usually have an external flash chip and reload their configuration on powerup
23:01:00 <geist> but are usually bigger
23:01:00 <geist> (more LUTs)
23:03:00 <clever> would a CPLD just have an internal flash array, and still load the config, or is it more that the flash is spread over the whole chip, and each config element IS a flash cell?
23:03:00 <geist> good question. i'm guessing the former?
23:03:00 <geist> but could be the latter. it's my understanding that the config that an fpga loads is largely sram cells spread all over the luts
23:04:00 <geist> so it could be you could embed the flash or eeprom in the luts themselves. maybe slower, but requires no load time
23:04:00 <geist> and thus a cpld is born
23:04:00 <clever> you could probably figure it out by looking at how quick a "boot time" the datasheet claims
23:04:00 <pie_> there have been some fpga reverse engineering efforts
23:05:00 <pie_> not sure if they really mainly only worked on the bitstreams or if that actually yielded much hardware info
23:05:00 <pie_> maybe the datasheets do say enoguh
23:05:00 <pie_> *enough. - or does anyone have access to TechInsights? :p
23:06:00 <geist> i dunno, both xilinx and altera document their fpgas pretty well
23:06:00 <geist> you can find good descriptions of precisely how the luts work, how they're laid out, etc
23:07:00 <geist> the hard part is figuring out how the bitstream maps to them, but it doesn't look *tremendously* hard. if you look at an uncompressed bitstream it really does look a lot like a gigantic bitmap
23:07:00 <geist> but that being said the lattice ones are well understood, such that there's an open source fpga compiler for it
23:07:00 <geist> i think the problem is fpga compilers are ridiculously complicated
23:21:00 <vancz> i would be curious to find out what they do one of these days
23:22:00 <vancz> and what makes the IDEs start at 30gigs or whatever
23:22:00 <vancz> at least for xilinx
23:22:00 <vancz> though im suspicious a lot of that is having like 10 copies of toolchains in them and maybe lots of IP? :p
23:22:00 <vancz> no idea
23:25:00 <vancz> suggests the following:
23:25:00 <vancz> All FPGA tools take a massive amount of space because each supported part needs it’s own model, that specifies not only features bitstream format etc but also all the detailed timing info to run routing, synthesis etc. Thus the higher end the supported parts are the bigger the models, and you have probably hundreds of them (a model might be “Only” Few hundreds megs of data, however you).
23:25:00 <bslsk05> ​ WTF Xilinx - Page 1
23:27:00 <clever> vancz: was talking about that timing stuff over in #cpudev, how the tooling basically needs to compute the entire propagation time from the input flipflops to the output flipflops, and then compute what max freq the design can handle
23:28:00 <clever> if the clock is over that number, the signal won't have time to propagate thru every gate, and cross whatever whacky distances the router picked
23:29:00 <vancz> That makes sense.
23:29:00 <clever> and then you may need to modify your design to pipeline things, so it does less work in a given clock cycle
23:29:00 <clever> if you split the job into 2 halves, then the propagation time is halved, so you can run at twice the freq
23:29:00 <clever> if there are no other bottlenecks
23:29:00 <clever> but now it takes 2 clock cycles to do the job
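The arithmetic behind clever's pipelining point is just reciprocals: fmax is set by the worst-case register-to-register delay, and inserting a pipeline register roughly halves that delay while adding a cycle of latency. A toy calculation (the delay figure is invented for illustration):

```python
# Worst-case combinational delay between flip-flops, in nanoseconds (invented).
delay_ns = 10.0
fmax_mhz = 1000.0 / delay_ns  # 100 MHz clock ceiling

# Pipeline it: a register in the middle splits the logic into two halves.
staged_delay_ns = delay_ns / 2
staged_fmax_mhz = 1000.0 / staged_delay_ns  # ceiling doubles to 200 MHz
latency_cycles = 2  # but a result now takes 2 clock cycles to emerge
```

Throughput doubles (one result per cycle at twice the frequency) even though per-result latency in cycles goes up, assuming no other bottleneck.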
23:32:00 <clever> with an asic, you're not limited by how the fpga laid out its gates and LUTs, so you can make things more compact
23:32:00 <clever> but you still have other issues
23:32:00 <clever> the fab-house will have a set of rules, on how close gates on the silicon can safely be packed, and your router needs to follow those rules
23:33:00 <vancz> You're not limited by how the fpga is laid out, you're limited by how you laid it out :p
23:33:00 <clever> but depending on what resources you're using, you may run out of something like blockram in a given area
23:33:00 <clever> so the router has to wander over to the other half of the chip, and steal some from there
23:34:00 <clever> and now you're getting bonus round-trips to the other side of the chip and back again
23:38:00 <clever> simplest way i can see to cause that, is to just shove all of the fpga block ram into a single array in my verilog
23:38:00 <clever> then the tooling has to generate an addr decoder, that routes things to the right region of the chip, based on the index i used
23:38:00 <clever> and what address i access, changes the access latency
23:39:00 <clever> but to hide that, the tooling just takes the worst latency possible, and declares that to be the speed limit
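The blockram scenario clever describes can be sketched as banks behind a generated address decoder: high address bits select a physical bank, distant banks cost more routing delay, and static timing has to assume the worst bank for every access. Bank count and latency figures here are invented for illustration:

```python
# One logical array spread over four physical block-RAM banks of 256 words each.
# Farther banks pay more routing delay (per-bank figures are invented).
BANK_SIZE = 256
bank_latency_ns = [2.0, 2.5, 4.0, 5.5]

def access_latency(addr):
    # The generated address decoder: high bits of the index pick the bank.
    return bank_latency_ns[addr // BANK_SIZE]

# Timing analysis can't know which address a cycle will hit, so the whole
# array is clocked for the slowest bank:
worst_case_ns = max(bank_latency_ns)
```

An access to address 0 would physically be ready in 2 ns, but the design still runs at the 5.5 ns worst case, which is exactly the "speed limit" effect above.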