Mark,

(snip)

> >> 1) Is the access time / CLOCK to DOUT time for the BRAMs faster when set
> >> for a wide data configuration?
> >> (ie presumably the output signals can skip 4 levels of multiplexers)
> >
> > no clue, but the speeds files would tell you (ie timing report)
>
> Yes, but only one Tbcko is reported by the software.
> I was interested in the timing of the hardware.
> I don't believe that there aren't any differences between
> either the clock to data for all data lines for all configurations,
> or the address to clock setup for all address lines for all configurations.
> I could be wrong, should the BRAMs contain a large number of FFs
> to synchronise all lines.
> The matter might be irrelevant, such as only a few ps difference, which
> would get swamped by the net delays.
> The net delays might be INERTIAL, in which case the swamping would be more.

There is no difference. You may believe what you wish.

> >> 2) Is the carry out pin of a CLB in a LOW/HIGH/TRI-STATE if not selected to
> >> be used in a 1-CLB hard macro?
> >
> > all logic is on, or off. even the tbufs are not tristate
>
> Makes sense; obviously bad for CMOS lines to be floating within high-speed
> logic.
> So I can initialise a carry chain even when the BX input is being used for
> something else?

Don't know.

> >> 3) What's the point of the CIN pins in the lowest row of slices?
> >> (ie with highest row index)
> >
> > No idea. I do know that signals that don't go anywhere are tied off by the
> > special term cells that go all around the CLB array
>
> I never understood about the tie-points for the 4000 series.
> Neither did the software.

These are special term cells which complete the routing whenever there is a discontinuity in the fabric. It has nothing to do with programming, or the software.
> >> 4) If the answer to the 3rd question is NONE then you might be amused
> >> to try smart-placing (within FPGA_EDITOR) a hard macro which uses a
> >> CIN (seems to like the lowest row for some reason!)
> >
> > so what happens?
>
> I stopped using "smart-placing".
>
> >> 5) The Spartan2 databook is very unclear about IO banks & PQ packages.
> >> Presumably the Vref & Vccio arrangements are as with the Virtex?
> >> (which was at least clearly documented)
>
> Sorry, unclear question. What I should have written is:
> I understand that within the Virtex devices there are 8 I/O banks,
> each potentially with their own Vccio & Vref lines.
> Within some packages the Vccio lines for different banks are bonded together.
> The extreme case is the HQ/PQ packages, with just 1 Vccio line,
> but still with independent Vrefs.
> I guess that the same applies for the Spartan2, but if I hadn't seen the
> Virtex documentation I might not even have made that guess.
> Am I correct in assuming independent Vrefs for the Spartan2 PQ
> packages?

Need to check the datasheet. Spartan packages usually have more IOs, and hence fewer options, less robust ground bounce behaviour, and tighter SSO rules.

> > spartan is always more io bonded out, with fewer vcc's and gnds. if you want
> > performance, then you buy Virtex. If you want low cost, you buy Spartan II.
>
> I want high processing performance and (generally) low I/O data rates.
> Hardly any of the OBUFs will be set to FAST, so I won't be ground bouncing.
> Spartan2 -6 is the most cost effective solution.
> (better performance than Virtex -5)

Note correct spelling of Virtex (tm).

> >> 6) Any chance of filling in some of the blank fields for Spartan2 -6
> >> timing?
> >
> > don't know. ask Robert Wells.
>
> Who is Robert Wells? Is he accessible via robert.wells@xilinx.com?

I answered this to an internal person, who should never have sent it outside. They should have asked Robert and gotten back to you.
I apologize to Robert. I will ask someone in the Spartan II program and get back to you.

> >> 7) How about guaranteed best-case timings as well?
> >> (specifically for BRAM access)
> >
> > some day
>
> Sorry. It's just that I could have sworn that Xilinx indicated the availability
> of this data several months ago. The indication was in tabular form.

Maybe for Virtex, Virtex E, but I doubt it for Spartan II. I'll ask.

> >> 8) Would it be possible to sacrifice a BRAM & use a DLL to lock onto the
> >> Tbcko, or would the delay of other BRAMs not track? (even though being
> >> guaranteed to have the same Vccint & presumably approximately the same
> >> temperature?) (bearing in mind question 1, I would only be applying this to
> >> identical data width BRAMs) If this approach is possible, which data bit
> >> should I route through?
> >
> > BRAM and DLL are completely different animals, and I doubt they would track
> > with process/voltage/temperature
>
> I meant having a clock from the output of a 2x DLL fed by the output of a BRAM
> whose sole duty is to toggle 1 output bit per input clock cycle.
> I.e. sanitising a dirty source in a highly controlled manner.
> The delay between the two clocks would depend on Tbcko.

The BRAM is synchronous, so where does its clock come from?

> >> 9) Do the industrial versions draw more startup current than the commercial
> >> devices at any given temperature, or is it only at extremely low
> >> temperatures (below the commercial range) that the current increases?
> >
> > just below 0C
>
> OK, so this should never be a reason NOT to select an Industrial part.
>
> >> 10) Using XST VHDL (SP8, on both WebPACK & ISE3.3i) RLOCs are lost on all but
> >> the first instance of a component containing RLOCs. This is BAD, especially
> >> when carry logic and/or MUXF5 is involved, because what creeps into the
> >> wrong CLB makes no sense at all (as regards timing/space).
> >> I originally thought this was a problem with MAP, but then examined the
> >> EDN file and found that MAP hadn't been told to cluster components within
> >> a CLB (for any except the first instance).
> >> "Preserve Hierarchy" had been SET (as a synthesis option). Presumably the
> >> only way around this is to use "Incremental Synthesis"? (Apart from hacking
> >> the EDN file to ensure all instances refer to the first cell definition, which
> >> does indeed work, but surely can't be considered a standard flow!) But
> >> incremental synthesis doesn't work for any OTHER component! I haven't tried
> >> using the XILFILE attribute, which could be a way to enforce the correct
> >> hierarchy.
> >
> > This is a software question, right? I don't do software.
>
> Sorry. I thought it was a bug report!

That is handled by opening a webcase over the web.

> >> 11) The P&R tools seem to consider (single port) RAM address lines to be
> >> unswappable. Is there any way around this? Almost invariably I find that
> >> swapping (ie deleting net pins then adding net pins) the two highest
> >> net delay address inputs improves the timing on BOTH lines.
> >
> > There may be a dedicated route reason why this isn't allowed. I just don't
> > know.
>
> It is allowed, but the tools don't do it themselves.
> I'll try pretending to MAP that the RAMs are LUTs, then patch the
> file after it has been routed.
> The P&R is very good about swapping LUT inputs to improve timing.
> The difficulty will be in ensuring all placement info gets through the tool
> chain.
> I'd pity someone targeting the xc2v10000 & using XDL.
>
> >> 12) Any chance of a job at Xilinx? (I can write tools. I can design hardware.
> >> I like Xilinx hardware.) I'm afraid that jobs locally to me are of the sort
> >> "You've got to use ACTEL" (probably something to do with the name of their
> >> main product line) or "You've got to use Virtex" (note not Virtex-E,
> >> Spartan2 or Virtex2, but Virtex).
> >> or "You've got to use ALTERA" or "You've
> >> got to have marketing experience". I can supply the names of the guilty
> >> parties, but I don't think that would help.
> >
> > There is always a chance. Got a resume? I can't say that it is a good chance,
> > or if it is a bad chance, but there is a chance, yes.
>
> OK, I'll just delete the section in which I indicate my bitterness as regards
> some versions of Xilinx software!

Don't do that! Why would we want to hire someone who does not honestly represent their views?

Austin

Article: 34451
Austin Franklin wrote:

PH> I've done schematics with floorplanning. I know better than absolutely no
PH> effort.

> Yes, but I said "correctly done" ;-) One can do a bad job at anything. Do
> you have a library of preplaced and mapped symbols that give you most any
> permutation of data path elements you would need (counters, timers,
> registers, I/O, muxes etc.)?

Unless "etc." includes some fairly unusual items, like a writeback cache, some interesting coders and decoders and such, I don't see how I could, even if I was still using schematics. I have not designed with schematics for years, for the basic reason that schematics take too much effort for more than a small design. Most of the designs I've done have had substantial sections that were unique, in the sense they were very different than anything I've done before or since.

> > Yet the majority of designs are done in HDL for
> > good reasons: less effort, better results.
>
> You have no proof of this.

That's correct, for I don't think such things can be proven. Do you?

> I have no doubt that there are SOME designs that MAY be
> implemented quicker in HDL, but I believe that is many many less than people
> believe.

People tend to assume that their experience is typical. Perhaps we are both falling for that trap, eh?

> There are MAJOR political/business/ego reasons that there is a LOT of
> press/push for use of HDLs... and none for schematics. There is a lot of
> money to be had if you can convince people that HDLs are "the way to
> go"... whether it's true or not isn't important.

The ASIC world mostly changed over to HDLs quite a while ago. If there was an advantage to switching back, couldn't a schematic vendor make a lot of money by promoting the reverse switch? Why would they fail to try for that money?

--
Phil Hays

Article: 34452
"Austin Franklin" <austin@darkroo87m.com> wrote in message news:<9lplfe$fu2$1@slb4.atl.mindspring.net>...

> > So, if you have an ISA card (legacy or PnP), and want to convert it to
> > a PCI card, and assuming that the card doesn't use the I/O or memory
> > addresses IBM defined, then it seems to me that you should implement
> > Configuration Address Space (and implement Base Address Registers
> > inside the Configuration Address Space), so that the BIOS or Windows can
> > assign I/O or memory addresses automatically.
>
> My understanding was he was using dedicated addresses, and as such, can't
> let the BIOS or Windows assign them. I do believe there are PCI boards that
> hardcode the addresses...

Sure, there are some PCI boards that hardwire the decode addresses, but again, Appendix G of the PCI Local Bus Specification Revision 2.2 states that only legacy devices (i.e., VGA, IDE) can do that. Legacy devices like VGA or IDE have a class code that identifies them as such. So, if you are not a VGA, IDE, or other device with a legacy class code, you should not hardcode addresses, and if you do so, you are violating the specification.

> > Since you are using interrupts, that makes it a requirement to
> > implement Configuration Address Space, because interrupt handlers of
> > PCI devices use Configuration Registers 3CH (Interrupt Line) and 3DH
> > (Interrupt Pin).
>
> Not really in PC based PCI implementations. They only use INTA. I believe
> there used to be a caveat about that in the spec.

I will assume that you have never read the PCI specification. My understanding from reading the PCI Local Bus Specification Revision 2.2 is that a PCI device that uses interrupts uses Configuration Registers 3CH (Interrupt Line) and 3DH (Interrupt Pin).
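As a rough illustration of the registers being discussed here, the sketch below shows how a driver on a PC would compute the address of the config dword holding Interrupt Line/Pin using the standard PCI Configuration Mechanism #1 (ports 0xCF8/0xCFC). Only the address arithmetic is shown; the actual port I/O (which requires kernel privileges) is omitted, and the example bus/device numbers are made up.

```python
# Sketch of PCI Configuration Mechanism #1 address encoding, per the PCI
# Local Bus Specification. A driver reads the Interrupt Line register
# (config offset 0x3C) by writing this encoded address to CONFIG_ADDRESS
# (port 0xCF8) and then reading a byte from CONFIG_DATA (port 0xCFC).

CONFIG_ADDRESS_PORT = 0xCF8  # write the encoded address here
CONFIG_DATA_PORT    = 0xCFC  # then read/write the register data here

def config_address(bus: int, device: int, function: int, register: int) -> int:
    """Build the 32-bit CONFIG_ADDRESS value for Mechanism #1."""
    assert 0 <= bus <= 255 and 0 <= device <= 31 and 0 <= function <= 7
    return (1 << 31             # enable bit
            | bus << 16
            | device << 11
            | function << 8
            | (register & 0xFC))  # register number, dword-aligned

INTERRUPT_LINE = 0x3C  # holds the IRQ number assigned by the BIOS/OS
INTERRUPT_PIN  = 0x3D  # 1..4 for INTA#..INTD#, 0 if no interrupt pin used

# Example (hypothetical device at bus 0, device 3, function 0):
addr = config_address(0, 3, 0, INTERRUPT_LINE)
print(hex(addr))  # 0x8000183c
```

Note that the low two register bits are masked off: config reads are dword-aligned, and the driver picks out the Interrupt Line byte from the returned dword.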
Although I have never dealt with PCI interrupts, Configuration Register 3CH (Interrupt Line) sounds important, since it tells the device driver which IRQ the interrupt is connected to. In PCI based IBM-PC/AT clones, the device driver has to know the IRQ number so that it can send the EOI (End Of Interrupt) command to the correct 8259A compatible programmable interrupt controller (to the master only, or to the slave and master, depending on which INTx# gets assigned to which IRQx). The PCI specification doesn't explicitly say how to deal with the 8259A programmable interrupt controller, but since the interrupt controller is 8259A compatible (with some extensions), I think even the device driver for a PCI device has to send the EOI command.

> > You will likely have to rewrite your device driver to accommodate the
> > I/O or memory address that can get assigned to any location, and also
> > the IRQ can get assigned to any IRQ (Configuration Register 3CH will hold
> > the IRQ number assigned by the BIOS or Windows).
>
> If the BIOS allows you to set the interrupt from each PCI slot to a
> particular INT, then that is not true.
>
> I believe he can do exactly what he wants, if he can find a BIOS that allows
> setting the INT.

Sure, some BIOSes may let you do that. But the motherboards I use automatically assign the IRQs to the PCI devices. So, the person who asked this question must accommodate that if he plans to sell the card he is talking about, so that it will work with any PCI based IBM-PC/AT clone. I suppose that if he is building such a card for personal use, it won't matter that much, assuming he can find such a BIOS, but it is definitely not desirable. I do realize that the person who asked this question doesn't have access to the source code of the device driver, but if that is the case, I suppose he will have to write his own (I realize that that's not easy).
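The EOI bookkeeping described above can be sketched as follows, using the standard PC/AT port numbers and the non-specific EOI command; this is an illustration of the sequencing, not driver code.

```python
# On a PC/AT, IRQs 8-15 arrive through the slave 8259A (cascaded into IRQ2
# of the master), so acknowledging one of them requires a non-specific EOI
# (0x20) to the slave's command port (0xA0) and then to the master's (0x20).
# IRQs 0-7 need only the master. The function returns the (port, value)
# writes a handler would perform.

PIC_MASTER_CMD = 0x20   # master 8259A command port
PIC_SLAVE_CMD  = 0xA0   # slave 8259A command port
EOI            = 0x20   # non-specific End-Of-Interrupt command

def eoi_writes(irq: int) -> list:
    """Port writes needed to acknowledge the given IRQ (0-15)."""
    assert 0 <= irq <= 15
    writes = []
    if irq >= 8:                          # routed through the slave PIC
        writes.append((PIC_SLAVE_CMD, EOI))
    writes.append((PIC_MASTER_CMD, EOI))  # master always gets an EOI
    return writes

print(eoi_writes(5))    # [(32, 32)]
print(eoi_writes(10))   # [(160, 32), (32, 32)]
```

This is why the driver needs the IRQ number from the Interrupt Line register: without it, it cannot tell whether the slave controller is involved.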
Regards,

Kevin Brace (don't respond to me directly, respond within the newsgroup)

Article: 34453
kevinbraceusenet@hotmail.com (Kevin Brace) writes:

> Sure, there are some PCI boards that hardwire the decode
> addresses, but again, according to Appendix G of PCI Local Bus
> Specification Revision 2.2 states that only legacy devices (i.e., VGA,
> IDE) can do that.
> Actually, the legacy devices like VGA or IDE, and has a class code
> that indicates such a device.
> So, if you are not VGA, IDE, or other legacy devices that indicates a
> legacy class code, you should not hardcode addresses, and if you do
> so, you are violating the specification.

What's wrong with making the PCI POST code display a legacy device, and using the appropriate legacy device class code? The use of port 80 for a POST code has a longer legacy than either VGA or IDE. VGA wasn't introduced until 1987; I don't recall when IDE was introduced.

Article: 34454
Hi Rick,

I already have control of the DRAM, so I know the refresh is OK, but I will probably modify the code for future design checking. What I found in the Verilog code confirmed what I thought: I am already using burst mode, but for a single "word". Now all I have to do is change the mode register to give the required burst length (which is all I really need).

Thanks again,
Dave

"Rick Filipkiewicz" <rick@algor.co.uk> wrote in message news:3B86ECD3.CD797820@algor.co.uk...
> Speedy Zero Two wrote:
>
> > Thanks Jan & Rick,
> >
> > I found a Verilog model at Micron that should help.
> >
> > Dave
>
> Sorry, I should have mentioned the sim models as well. They're pretty good too;
> the only thing they really lack is a refresh period check - or at least the ones
> I downloaded a long while back.

Article: 34456
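The burst-length change discussed in the SDRAM thread above comes down to reprogramming the mode register via a LOAD MODE REGISTER command. The sketch below shows the standard JEDEC-style SDR SDRAM mode-register encoding as found in, e.g., Micron datasheets; verify the field layout against the datasheet of the actual part before relying on it.

```python
# JEDEC-style SDR SDRAM mode register, driven onto the address bus during a
# LOAD MODE REGISTER command:
#   A2:A0 - burst length, A3 - burst type, A6:A4 - CAS latency,
#   A8:A7 - operating mode (00 = normal), A9 - write burst mode.

BURST_LENGTHS = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011, 'page': 0b111}

def mode_register(burst_len, cas_latency, interleaved=False,
                  single_write=False) -> int:
    """Value to drive onto the address bus during LOAD MODE REGISTER."""
    mr = BURST_LENGTHS[burst_len]          # A2:A0 - burst length
    mr |= (1 << 3) if interleaved else 0   # A3    - burst type
    mr |= cas_latency << 4                 # A6:A4 - CAS latency
    # A8:A7 stay 0 for normal operation
    mr |= (1 << 9) if single_write else 0  # A9    - write burst mode
    return mr

# Example: CAS latency 2, sequential bursts of 8 instead of length 1:
print(hex(mode_register(8, 2)))  # 0x23
```

Changing only the A2:A0 field (and reissuing LOAD MODE REGISTER with the banks precharged) is exactly the single-register change Dave describes.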
"Phil Hays" <spampostmaster@home.com> wrote in message news:3B871C38.9295BFA2@home.com...

> Austin Franklin wrote:
>
> PH> I've done schematics with floorplanning. I know better than absolutely no
> PH> effort.
>
> > Yes, but I said "correctly done" ;-) One can do a bad job at anything. Do
> > you have a library of preplaced and mapped symbols that give you most any
> > permutation of data path elements you would need (counters, timers,
> > registers, I/O, muxes etc.)?
>
> Unless "etc." includes some fairly unusual items, like a writeback cache, some
> interesting coders and decoders and such, I don't see how I could, even if I was
> still using schematics. I have not designed with schematics for years, for the
> basic reason that schematics take too much effort for more than a small design.

I believe you were not using schematics correctly if you were not able to do anything but a small design. Some of the largest/fastest/densest chips in the world, that I know of, are done with schematics. Even our beloved FPGAs are designed with schematics (oh the horror!). The Xilinx PCI core is done in Viewdraw schematics.

Personally, I find reading through a bunch of text quite tedious and time consuming. Most everyone I know, when they first get a block of HDL code in front of them that they have not seen, draws a block diagram. Why bother if the function of the HDL is so obvious and clear to understand?

I believe people just don't know how to use schematics correctly. It's that simple. There never were any substantial libraries available from any major vendor, and I certainly wasn't going to just give away my years and years of libraries to anyone. They couldn't be protected, so anyone could just copy them. I worked with Viewlogic and other vendors for years trying to get this concept into their heads, but they, from a business standpoint, couldn't figure out how to make any money with it to pay for the development.

> Most of the designs I've done have had substantial sections that were unique, in
> the sense they were very different than anything I've done before or since.
>
> > > Yet the majority of designs are done in HDL for
> > > good reasons: less effort, better results.
> >
> > You have no proof of this.
>
> That's correct, for I don't think such things can be proven. Do you?

Then why did you make a claim that you know isn't true? I've done a number of designs in both, and schematics win hands down to finished product every time, except for the simplest/slowest designs. There can be way too much fussing with timing and tools etc. with synthesis. When synthesis works, it works, and that's nice... but there is no guarantee that it will, and if it doesn't, the amount of time it takes to get it to work can be exponential, or even end in complete failure.

> > I have no doubt that there are SOME designs that MAY be
> > implemented quicker in HDL, but I believe that is many many less than people
> > believe.
>
> People tend to assume that their experience is typical. Perhaps we are both
> falling for that trap, eh?

I know my experience isn't typical... but having dozens of clients, I would like to believe I have a very good cross section of their experience.

> > There are MAJOR political/business/ego reasons that there is a LOT of
> > press/push for use of HDLs... and none for schematics. There is a lot of
> > money to be had if you can convince people that HDLs are "the way to
> > go"... whether it's true or not isn't important.
>
> The ASIC world mostly changed over to HDLs quite a while ago.

Hum. Well, as I said above, most of the fastest/densest/largest designs I am aware of are done in schematics. Synthesis is like C is for the x86 architecture (though C compilers have become very very good these days): simply because the x86 has SO much memory, and runs SO fast these days, a programmer can write the worst, most inefficient code they can, and it will probably "work". We are moving away from real engineering to the million monkey theory... Now, this doesn't mean that there aren't some exceptional C programmers, or HDL coders, out there; on the contrary. A tool is only as good as its operator.

> If there was
> advantage to switching back, couldn't a schematic vendor make a lot of money by
> promoting the reverse switch? Why would they fail to try for that money?

Not really. Schematic packages don't sell for near what synthesis packages "sell" for. Most people who have synthesis also have board schematic packages anyway. Unfortunately, perception is perceived as everything... whether the perception is based on reality or truth... or not.

Article: 34457
On Fri, 24 Aug 2001 04:50:01 GMT, "Dave Feustel" <dfeustel1@home.com> wrote:

> Why is this spam?

Because it's not about FPGAs, and because it's nothing but a commercial product announcement. If everybody trying to sell chips posted in all the technology newsgroups, the groups would be nothing but spam and would serve nobody.

> What's the story with the Max9690?

Well, Maxim used to make a great ECL comparator... when they felt like it. They would occasionally lose the recipe and stop production without warning. Once they made my company sign a waiver of liability against device bugs before they would allow us to buy more chips - we were nearly out of business at that point, since promised parts had not been shipped for six months - but refused to tell us what the bugs *were*. They did say that the part was being produced in the millions and was likely to be available in the long term. Then they discontinued the part without warning or provisions for end-of-life buys. The end-of-life notice was simply that back-ordered parts didn't arrive. They told my purchasing people that it was replaced with the 'drop-in equivalent' MAX9691, which it certainly isn't; the 9691 won't work in most of our locations. I like Maxim and their stuff in general, but they were irresponsible stonewallers on this one. The lawyers should diversify from kids' toys and tires, and go after the semi people next.

John

Article: 34458
Phil Hays wrote:

> This is a reasonable analogy, thanks for bringing it up. "C", FORTRAN and
> other "higher level languages" allow people with no detailed understanding of a
> computer to write very complex programs. The compilers are very good, and while
> an experienced assembly code programmer can probably do slightly better than a
> good higher level programmer, unless the program is not complex and/or needs to
> fit into a tiny space and/or is very speed critical, the program will probably
> be best written in a higher level language for maintainability, for portability
> and for minimum cost. A monkey will produce a result that may work, but there
> is no guarantee that it will, and if it doesn't, the amount of time it takes to
> get it to work can be exponential, or even complete failure. Regardless of the
> method chosen.

Generally a program is developed to work with, say, 95% of the features the clients want. Portability and documentation are less important. I see schematics more like assembler programs: they work faster, but only if you know the hardware. HDLs tend to be too abstract from real hardware to be efficient, from my very limited study. I don't see HDLs as being as portable as they claim, since FPGAs are all different. Unless you stick to flip-flops and simple gates with 4 inputs or fewer, things will not be portable.

Ben.

PS. TTL had one advantage: you changed suppliers and your design still worked.

--
Standard Disclaimer: 97% speculation, 2% bad grammar, 1% facts.
"Pre-historic CPUs" http://www.jetnet.ab.ca/users/bfranchuk
Now with schematics.

Article: 34459
How do you do complicated state machines with schematics? The circuit would look quite messy and non-intuitive, wouldn't it?

Austin Franklin wrote:
>
> I believe you were not using schematics correctly if you were not able to do
> anything but a small design. Some of the largest/fastest/densest chips in
> the world, that I know of, are done with schematics. Even our beloved FPGAs
> are designed with schematics (oh the horror!)! The Xilinx PCI core is done
> in Viewdraw schematics.
>
> Personally, I find reading through a bunch of text quite tedious and time
> consuming. Most everyone I know, when they first get a block of HDL code in
> front of them, that they have not seen, draw a block diagram. Why bother if
> the function of the HDL is so obvious and clear to understand?
>
> I believe people just don't know how to use schematics correctly. It's that
> simple. There never were any substantial libraries available from any major
> vendor, and I certainly wasn't going to just give away my years and years of
> libraries to anyone. They couldn't be protected, so anyone could just copy
> them. I worked with Viewlogic and other vendors for years trying to get
> this concept into their heads, but they, from a business standpoint,
> couldn't figure out how to make any money with it to pay for the
> development.

--
Russell Shaw, B.Eng, M.Eng(Research)
Victoria, Australia, Down-Under
http://home.iprimus.com.au/rjshaw
Article: 34460
"Russell Shaw" <rjshaw@iprimus.com.au> wrote in message news:3B885646.18829572@iprimus.com.au... > How do you do compicated state machines with schematics? The circuit > would look quite messy and non-intuitive wouldn't it? Sometimes I have to do the state machines in schematics, because they don't make timing in synthesis. Sometimes, the state machines make timing in synthesis, so I leave them in synthesis. It really isn't that hard to do state machines in schematics if you draw then well, and document them well right on the schematic. There was a little company, called "Fliptonics", I believe, that had a very nice schematic based state machine library... Perhaps Philip knows something about that company, and this purported state machine library ;-)Article: 34461
Austin Franklin wrote: PH> I have not designed with schematics for years, for the basic reason PH> that schematics take too much effort for more than a small design. > I believe you were not using schematics correctly if you were not able to do > anything but a small design. Small in complexity would have been more accurate. A RAM or a FPGA has lots of transistors and connections, but not much complexity. A design with lots of gates and little complexity is one design where schematics can do well. Another good place for schematics is a design on which lots of effort can be expended for marginal improvements, such as a large volume CPU, which has both lots of gates and lots of complexity. Major CPU design teams are hundreds of designers, some of which may work for a year to slightly optimize a few hundred transistors. > > > > Yet the majority of designs are done in HDL for > > > > good reasons: less effort, better results. > > > > > > You have no proof of this. > > > > That's correct, for I don't think such things can be proven. Do you? > > Then why did you make a claim that you know isn't true? As Godel showed, propositions can be both true and unprovable. http://www.earlham.edu/~peters/courses/logsys/g-proof.htm To prove that "schematics (or HDL) are usually better" would require that all (or all useful) designs be formally identified, and a majority defined that is better in schematics (or HDL). Sorry, but I doubt if the class of designs or of useful designs can be established in any reasonable fashion. I don't care to attempt to prove this statement: but do be my guest. Yet it is true, at least for designs that are both complex and need to run reasonably fast. Write and simulate the HDL first, then spend the effort on speeding up the critical paths to the needed speed. If you have done a reasonable job of the first, the second isn't that hard, and may not be needed. 
> I've done a number of designs in both, and schematics win hands down to > finished product every time, except for the simplest/slowest designs. Back when synthesis got started, designers were drawing transistors on schematics and wanted a faster way to an acceptable answer. Sure, by calculations and experience the human designer can produce a nearly optimal design for a bit of logic (say for the carry bit of an ALU, perhaps some 20 or 30 transistors, using many hours of human designer time: but a simple synthesis tool can do nearly as well in seconds. By freeing the designer from the tiny details of transistors and gates, much more complex designs could be completed. > There can be way too much > fussing with timing and tools etc. with synthesis. When synthesis works, it > works, and that's nice...but there is no guarantee that it will, and if it > doesn't, the amount of time it takes to get it to work can be exponential, > or even complete failure. When I write code I know fairly closely what the synthesis tool is going to produce, and how fast it will run. This takes experience, but so does "proper use of schematics", correct? And yes, I'm sure that an inexperienced HDL designer can waste a lot of time fussing with timing and tools and such. I don't. I know that an inexperienced schematic designer can waste a lot of time fussing with timing and tools and such. I did that a decade ago. > Synthesis is like C is for the X86 > architecture...(though C compilers have become very very good these days), > simply because the X86 has SO much memory, and runs SO fast these days, a > programmer can code the worst most inefficient code they can, and it will > probably "work". We are moving away from real engineering to the million > monkey theory... Now, this doesn't mean that there aren't some exceptional > C programmers, or HDL coders out there, on the contrary. A tool is only as > good as its operator. This is a reasonable analogy, thanks for bringing it up. 
"C", FORTRAN and other "higher level languages" allow people with no detailed understanding of a computer to write very complex programs. The compilers are very good, and while an experienced assembly code programmer can probably do slightly better than a good higher level programmer, unless the program is not complex and/or needs to fit into a tiny space and/or is very speed critical, the program will probably be best written in a higher level language for maintainability, for portability and for minimum cost. A monkey will produce a result that may work, but there is no guarantee that it will, and if it doesn't, the amount of time it takes to get it to work can be exponential, or even complete failure. Regardless of the method chosen. -- Phil HaysArticle: 34462
In comp.arch.fpga Dave Feustel <dfeustel1@home.com> wrote:
> I also am getting interested in working with Icarus although that is
> possibly complicated by my running Windows. I'd rather have a version
> I can compile with Visual Studio 6 (either as a project or with a make
> file) and run Icarus natively on Windows than use Cygwin (which I
> have tried) although running Icarus in a Linux box under VMWare is possible.

You shouldn't have to run Icarus from within the Cygwin shell; it's just compiled and linked with Cygwin.

Hamish
--
Hamish Moffatt VK3SB <hamish@debian.org> <hamish@cloud.net.au>
Article: 34463
> You shouldn't have to run Icarus from within the Cygwin shell;
> it's just compiled and linked with Cygwin.

It's not even compiled/linked with Cygwin. Cygwin is not needed at all to run Icarus Verilog: you need Cygwin at compile time but only for all the tools around the compiler, and not the C++ compiler itself. The magic here is the mingw compiler, a gcc port for native Windows, that is used to compile Icarus Verilog binaries. The sanctioned build process uses the Cygwin shell to gain access to things like bison, flex, bash, gperf, et al, but the resulting binary is native Windows and does not require any special tricks or packages/dlls to run. It is as native as a VC6 compiled binary. (I went through some pains to make sure that Icarus Verilog can be compiled *native* on most any platform with only free tools. It's a matter of pride, by now.)

The mingw.txt file included in the iverilog-0.5.tar.gz tarball has step-by-step instructions for compiling Icarus Verilog in this fashion. The only obvious problem with that is that you need tar and gzip to extract said bundle:-P Perhaps I should write some starter tips into the FAQ. Hamish, as the Debian porter, can be forgiven for not knowing this little detail:-)
--
Steve Williams            "The woods are lovely, dark and deep.
steve@icarus.com           But I have promises to keep,
steve@picturel.com         and lines to code before I sleep,
http://www.picturel.com    And lines to code before I sleep."
Article: 34464
Hello,

Just to get familiar with the basics of verilog and all that I tried to make a 16 bit shift register, and show the contents on a led display. In simulation it does all it's supposed to do. I do my simulations with Icarus Verilog 0.5 (g++-3.0). Now when I compile it to the target hardware, still everything is tip-top. As hardware I'm using a Post-It board, which has an Altera EPM3128ATC100-7 on it. [http://www.cmosexod.com/board.htm] For synthesis I use Max+plusII version 10.0 9/14/2000 (2000.09) from Altera. It even programs OK. So what's the problem I hear you say?

As inputs I have async_ser_clk and async_ser_dat, which change out of sync with the system clock of the CPLD. So on the posedge of the system clock I latch those to two registers to synchronize them. For debugging purposes I also latch them into two of the four led displays. On the negedge of the system clock I then check if the synchronized ser_clock has gone from low to high, and if so, shift the register.

Now the problem is this: It does seem to synchronize the ser_clock, and does _something_ on the low to high transition. Only that something should be shift the register, like this: (See also the appended source-code)

led_1[7] <= led_1[6];
led_1[6] <= led_1[5];
...
led_0[1] <= led_0[0];
led_0[0] <= ser_dat;

What it seems to do instead is:

led_1[7] <= ser_dat;
led_1[6] <= ser_dat;
...
led_0[1] <= ser_dat;
led_0[0] <= ser_dat;

It doesn't shift, it changes _ALL_ 16 bits of the 'shift'-register to either 1 or 0, the value of ser_dat. Oh, and before I forget, I don't think it's a timing problem, since for debugging I use a system clock of 200 Hz, and change the async_ser_clk and async_ser_dat on the touch of a button (== as slow as I want).

So did I make a stoopid mistake somewhere, and should it compile to this, or is something else going on?
any help appreciated,

Fred

---

/*
 * shift_leds.v - shift register with led display
 */

`timescale 1ns / 100ps

module shift_leds(sys_clk, sys_rst_l, async_ser_clk, async_ser_dat,
                  led_3, led_2, led_1, led_0);

   input sys_clk;
   input sys_rst_l;
   input async_ser_clk;   // serial clock (has to be resynced)
   input async_ser_dat;   // serial data (has to be resynced)
   output [7:0] led_3;
   output [7:0] led_2;
   output [7:0] led_1;
   output [7:0] led_0;

   wire sys_clk;
   wire sys_rst_l;
   wire async_ser_clk;
   wire async_ser_dat;

   reg [7:0] led_3;   // Left led
   reg [7:0] led_2;
   reg [7:0] led_1;
   reg [7:0] led_0;   // Right led

   reg ser_clk, ser_dat, last_ser_clk;

   always @(sys_clk) begin
      if (sys_clk) begin
         // @(posedge sys_clk) --> synchronize ser_clk and ser_dat
         last_ser_clk <= ser_clk;
         ser_clk <= async_ser_clk;
         ser_dat <= async_ser_dat;
      end
      else begin
         // @(negedge sys_clk) --> use synchronized ser_clk and ser_dat
         led_3[7] <= ser_clk;
         led_3[6] <= ser_clk;
         led_3[5] <= ser_clk;
         led_3[4] <= ser_clk;
         led_3[3] <= ser_clk;
         led_3[2] <= ser_clk;
         led_3[1] <= ser_clk;
         led_3[0] <= ser_clk;
         led_2[7] <= ser_dat;
         led_2[6] <= ser_dat;
         led_2[5] <= ser_dat;
         led_2[4] <= ser_dat;
         led_2[3] <= ser_dat;
         led_2[2] <= ser_dat;
         led_2[1] <= ser_dat;
         led_2[0] <= ser_dat;

         if ((~last_ser_clk) && (ser_clk)) begin
            // @(posedge synchronized ser_clk)
            // shift ser_dat through
            // led_1 <-- led_0 <-- ser_dat
            led_1[7] <= led_1[6];
            led_1[6] <= led_1[5];
            led_1[5] <= led_1[4];
            led_1[4] <= led_1[3];
            led_1[3] <= led_1[2];
            led_1[2] <= led_1[1];
            led_1[1] <= led_1[0];
            led_1[0] <= led_0[7];
            led_0[7] <= led_0[6];
            led_0[6] <= led_0[5];
            led_0[5] <= led_0[4];
            led_0[4] <= led_0[3];
            led_0[3] <= led_0[2];
            led_0[2] <= led_0[1];
            led_0[1] <= led_0[0];
            led_0[0] <= ser_dat;
         end // @(posedge synchronized ser_clk)
      end
   end // always @ (sys_clk)

endmodule // shift_leds

/*
 * main.v - testbench for shift_leds.v
 */

`timescale 1ns / 100ps

module main;

   parameter ALTERA_HALF_T=5;   // T=2*5ns=10ns --> 100 MHz
   parameter AVR_HALF_T=53;     // T=2*53ns=106ns --> 9.43MHz

   // simulate 2 ports on the AVR acting as serial bus
   reg ser_clk;
   reg ser_dat;

   // use leds on post-it as outputs
   wire [7:0] led_3;
   wire [7:0] led_2;
   wire [7:0] led_1;
   wire [7:0] led_0;

   // system clock and reset button
   reg sys_clk;
   reg sys_rst_l=1;   // Button not pressed

   // simulation helper variables
   reg [31:0] value;
   integer i, j;

   // instantiate modules
   shift_leds my_shift_leds(sys_clk, sys_rst_l, ser_clk, ser_dat,
                            led_3, led_2, led_1, led_0);

   initial begin
      $dumpfile("shift_leds.vcd");
      $dumpvars(0, main);

      // AVR sending 32 bits over serial bus
      value=32'h76543210;   // value to write to shift register
      for (i=0; i<32; i=i+1) begin
         ser_clk = 0;
         ser_dat = value[31-i];   // shift MSB first
         #AVR_HALF_T;
         ser_clk = 1;
         #AVR_HALF_T;
      end
   end

   // generate system clock for Altera
   initial begin
      for (j=0; j<((32*AVR_HALF_T)/ALTERA_HALF_T)+2; j=j+1) begin
         sys_clk = 0;
         #ALTERA_HALF_T;
         sys_clk = 1;
         #ALTERA_HALF_T;
      end
   end

endmodule // main
Article: 34466
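A likely culprit (not confirmed anywhere in the thread) is the level-sensitive always @(sys_clk) block: it simulates as both-edge behaviour, but the EPM3128 has no dual-edge registers, so the fitter has to improvise. A sketch of the same synchronize-then-shift idea kept on a single clock edge (names follow the post above; the four led displays are collapsed into one 16-bit register for brevity, and the one-stage synchronizer is kept from the original, although two stages would be safer against metastability):

```verilog
// Single-edge rewrite of the synchronizer plus shifter (sketch only).
module shift_sync (sys_clk, async_ser_clk, async_ser_dat, led);
   input  sys_clk, async_ser_clk, async_ser_dat;
   output [15:0] led;
   reg [15:0] led;

   reg ser_clk, ser_dat, last_ser_clk;

   always @(posedge sys_clk) begin
      // synchronize the async inputs, as in the original
      ser_clk      <= async_ser_clk;
      ser_dat      <= async_ser_dat;
      last_ser_clk <= ser_clk;
      // detect a rising edge of the synchronized serial clock
      if (ser_clk && !last_ser_clk)
         led <= {led[14:0], ser_dat};   // shift on the detected edge
   end
endmodule
```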
On Fri, 24 Aug 2001 17:58:09 -0700, Austin Lesea <austin.lesea@xilinx.com> wrote: > >--------------B555A475B383CCA24BB5E365 >Content-Type: text/plain; charset=us-ascii >Content-Transfer-Encoding: 7bit > >Mark, > >(snip) > >> >> >> 1) Is the access time / CLOCK to DOUT time for the BRAMS faster when set >> >> for a wide data configuration? >> >> (ie presumably the output signals can skip 4 levels of multiplexers) >> > >> >no clue, but the speeds files would tell you (ie timing report) >> > >> Yes but only one Tbcko is reported by the software. >> I was interested in the timing of the hardware. >> I don't believe that there aren't any differences between >> either the clock to data for all data lines for all configurations, >> or the address to clock setup for all address lines for all configurations. >> I could be wrong, should the BRAMS contain a large number of FFs >> to synchronise all lines. >> The matter might be irrelevant, such as only a few pS difference, which >> would get swamped by the net delays. >> The net delays might be INERTIAL, in which case the swamping would be more. > >There is no difference. You may believe what you wish. So is there zero skew on the Data out lines ? Or is the skew the difference between Tbcko max & min (1.25ns approx) ? Or do we use a margin such as 10% of Tbcko max (0.33ns) ? > >> >> >> >> 2) is the carry out pin of a CLB in a LOW/HIGH/TRI STATE if not selected to >> >> be used in a 1 CLB hard macro? >> > >> >all logic is on, or off. even the tbufs are not tristate >> > >> Makes sense, obviously bad for CMOS lines to be floating within high speed >> logic. >> So I can initialise a carry chain even when the BX input is being used for >> something else? > >Don't know. > >> >> >> >> 3) whats the point of the CIN pins in the lowest row of slices? >> >> (ie with highest row index) >> > >> >No idea. 
I do know that signals that don't go anywhere are tied off by the >> >special term cells that go all around the clb array >> > >> I never understood about the tie-points for the 4000 series. >> Neither did the software. > >These are special term cells which complete the routing whenever there is a discontinuity in the >fabric. Has nothing to do with programming, or the software. Yes, but I didn't understand why these couldn't be combinational functions embedded into the routing (or the pin lines), rather than strange fixed points that were supposed to be routed to. The problem with the M1 software for the 4000 series was that the Tying module would refuse to run with a design that had an irrelevant DRC error. (such DRC errors would pop up when I absorbed an invertor into the H-LUT of a CLB containing a RAM. The H-LUT would normally contain just a MUX, and the DRC would consider anything else but this to be very bad. This wouldn't have been any problem, but some of the RAMS I was using only needed 3 address lines. The spare line was connected to GND or VCC (doesn't matter which.) The routing was incomplete (just for these spare lines)... so they needed to be tied. So I requested Xilinx uk tech support to reduce the DRC to a mild warning, but they weren't interested. (This was several years ago..) There weren't any such problems with the Xact6 software, but the Xact6 didn't support the 4000EX & 4000XL devices. > >> >> >> >> 4) if the answer to the 3rd question is NONE then you might be amused >> >> to try smart-placing (within FPGA_EDITOR) a hard macro which uses a >> >> CIN (seems to like the lowest row for some reason!) >> > >> >so what happens? >> > >> I stopped using "smart-placing". >> > >> >> 5) The Spartan2 databook is very unclear about IO banks & PQ packages. >> >> Presumably the Vref & Vccio arrangements are as with the Virtex? >> >> (which was at least clearly documented) >> > >> Sorry , unclear question. 
What I should have written would be:
>> I understand that within the Virtex devices there are 8 I/O Banks,
>> each potentially with their own Vccio & Vref lines.
>> Within some packages the Vccio lines for different banks are bonded together.
>> The extreme case is with the HQ/PQ packages, with just 1 Vccio line,
>> but still with independent Vrefs.
>> I guess that the same applies for the Spartan2, but if I hadn't seen the
>> Virtex documentation I might not even have made that guess.
>> Am I correct in assuming independent Vrefs for the Spartan2 PQ
>> packages?
>
>Need to check the datasheet. Packages in Spartan usually have more IO's, and hence fewer
>options, and less robust ground bounce, and tighter SSO rules.
>
>> >spartan is always more io bonded out, with fewer vcc's and gnds. if you want
>> >performance, then you buy Virtex. If you want low cost, you buy Spartan II.
>> >
>>
>> I want high processing performance and (generally) low I/O data rates.
>> Hardly any of the OBUFS will be set to FAST, so I won't be ground bouncing.
>> Spartan2-6 is the most cost effective solution.
>> (better performance than Virtex-5 )

Note correct spelling of Virtex (tm) Sorry! (Even worse .. I've made a grammatical error elsewhere. Too much Star Trek!)
>> >> ( specifically for BRAM access) >> > >> >some day >> > >> Sorry It's just that I could have sworn that Xilinx indicated the availability >> of this data several months ago.The indication was in tabular form. > >Maybe for Virtex, Virtex E, but I doubt it for Spartan II. I'll ask. > Don't worry, the speedprint software DOES provide min & max timings. I was obviously not the only one who didn't immediately notice it lurking in bin\nt. >> >> >> >> 8) Would it be possible to sacrifice a BRAM & use a DLL to lock onto the >> >> Tbcko , or would the delay of other BRAMS not track? (even though being >> >> guaranteed to have the same Vccint & presumably approximately the same >> >> temperature?) (bearing in mind question 1, I would only be applying this to >> >> identical >> >> data width BRAMS) If this approach is possible which data bit should I route >> >> through? >> > >> >BRAM and DLL are completely different animals, and I doubt they would track >> >with process/voltage/temperature >> > >> I meant having a clock from the output of a 2xDLL fed by the output of a BRAM >> whose sole duty is to toggle 1 output bit per input clock cycle. >> I.e. sanitising a dirty source in a highly controlled manner. >> The delay between the two clocks would depend on Tbcko. > >The BRAM is synchronous, so where does its clock come from? Any clean source, most probably from another DLL, but perhaps from a small Virtex II device ( 1 per 4 Spartan II devices for economic reasons). The Virtex II DCMs look very useful, as do their multipliers, as do their I/O standards. (shame they cost more than the Spartan II ) > >> > >> >> 9) Do the industrial versions draw more startup current than the commercial >> >> devices at any given temperature, or is it only at extremely low >> >> temperatures (below the commercial range) that the current increases? >> > >> >just below 0C >> >> OK, so this should never be a reason to NOT select an Industrial part.. 
>> >> > >> >> >> >> 10) Using XST VHDL (SP8, on both WebPACK & ISE3.3i) RLOCS are lost on >> >>all but >> >> the first instance of a component containing RLOCS. This is BAD , especially >> >> when Carry logic and / or MUXF5 is involved, because what creeps into the >> >> wrong CLB makes no sense at all (as regards timing/space) . I originally >> >> thought this was a problem with MAP, but then examined the EDN file & >> >>found >> >> that MAP hadn't been told to cluster components within a CLB. (for any except >> >> the first instance) >> >> "Preserve Heirarchy" had been SET (as a synthesis option). Presumably the >> >>only way around this is to use "Incremental Synthesis"? (Apart from hacking >> >> the edn file to ensure all instances refer to the first cell definition,which >> >> does indeed work, but surely can't be considered a standard flow!) but >> >> incremental synthesis doesn't work for any OTHER component! I haven't tried >> >> using the XILFILE attribute which could be a way to enforce the correct >> >> hierarchy. >> > >> >This is a software question, right? I don't do software. >> > >> Sorry. I thought it was a bug report ! > >That is handled by opening a webcase over the web. Ok. > >> >> >> >> 11)The P&R tools seem to consider (single port) RAM address lines to be >> >> unswappable. Is there any way around this, because almost invariably I >> >> find that >> >> swapping (ie deleting net pins then adding net pins) the two highest >> >> net delay address inputs improves the timing on BOTH lines. >> > >> >There may be a dedicated route reason why this isn't allowed. I just don't >> >know. >> > >> It is allowed, but the tools don't do it themselves. >> I'll try pretending to MAP that the RAMs are LUTs, then patch the >> file after it has been routed. >> The P&R is very good about swapping LUT inputs to improve timing. >> The difficulty will be in ensuring all placement info gets through the tool >> chain. 
>> I'd pity someone targeting the xc2v10000 & using XDL. >> >> >> >> >> 12) Any chance of a job at Xilinx? (I can write tools. I can design hardware. >> >> I like Xilinx hardware.) I'm afraid that jobs locally to me are of the sort >> >> "You've got to use ACTEL" (probably something to do with the name of their >> >> main product line.) or "You've got to use Virtex" ( note not Virtex-E, >> >> Spartan2 or Virtex2 but Virtex). or "You've got to use ALTERA" or "You've >> >> got to have marketing experience". I can supply the names of the guilty >> >> parties, but I don't think that would help. >> > >> >There is always a chance. Got a resume? I can't say that it is a good chance, >> >or if it is a bad chance, but there is a chance, yes. >> > >> Ok, I'll just delete the section in which I indicate my bitterness as regards >> some versions of Xilinx software! >> > >Don't do that! Why would we want to hire someone who does not honestly represent their views. > >Austin >Article: 34467
This was an email sent to Peter.Alfke@Xilinx.com.... Dear Sir, I am sending you (as the sane technical voice of Xilinx) a few questions and one bug report. I approve in advance should you wish to post any part of this or subsequent emails onto the comp.arch.fpga newsgroup to assist others. Questions mainly about Spartan2... 1) Is the access time / CLOCK to DOUT time for the BRAMS faster when set for a wide data configuration? (ie presumably the output signals can skip 4 levels of multiplexers) 2) is the carry out pin of a CLB in a LOW/HIGH/TRI STATE if not selected to be used in a 1 CLB hard macro? 3) whats the point of the CIN pins in the lowest row of slices? (ie with highest row index) 4) if the answer to the 3rd question is NONE then you might be amused to try smart-placing (within FPGA_EDITOR) a hard macro which uses a CIN (seems to like the lowest row for some reason!) 5) The Spartan2 databook is very unclear about IO banks & PQ packages. Presumably the Vref & Vccio arrangements are as with the Virtex? (which was at least clearly documented) 6) Any chance of filling in some of the blank fields for Spartan2 -6 timing? 7) How about guaranteed best case timings as well ? ( specifically for BRAM access) 8) Would it be possible to sacrifice a BRAM & use a DLL to lock onto the Tbcko , or would the delay of other BRAMS not track? (even though being guaranteed to have the same Vccint & presumably approximately the same temperature?) (bearing in mind question 1, I would only be applying this to identical data width BRAMS) If this approach is possible which data bit should I route through? 9) Do the industrial versions draw more startup current than the commercial devices at any given temperature, or is it only at extremely low temperatures (below the commercial range) that the current increases? 10) Using XST VHDL (SP8, on both WebPACK & ISE3.3i) RLOCS are lost on all but the first instance of a component containing RLOCS. 
This is BAD , especially when Carry logic and / or MUXF5 is involved, because what creeps into the wrong CLB makes no sense at all (as regards timing/space) . I originally thought this was a problem with MAP, but then examined the EDN file & found that MAP hadn't been told to cluster components within a CLB. (for any except the first instance) "Preserve Heirarchy" had been SET (as a synthesis option). Presumably the only way around this is to use "Incremental Synthesis"? (Apart from hacking the edn file to ensure all instances refer to the first cell definition,which does indeed work, but surely can't be considered a standard flow!) but incremental synthesis doesn't work for any OTHER component! I haven't tried using the XILFILE attribute which could be a way to enforce the correct hierarchy. 11)The P&R tools seem to consider (single port) RAM address lines to be unswappable. Is there any way around this, because almost invariably I find that swapping (ie deleting net pins then adding net pins) the two highest net delay address inputs improves the timing on BOTH lines. 12) Any chance of a job at Xilinx? (I can write tools. I can design hardware. I like Xilinx hardware.) I'm afraid that jobs locally to me are of the sort "You've got to use ACTEL" (probably something to do with the name of their main product line.) or "You've got to use Virtex" ( note not Virtex-E, Spartan2 or Virtex2 but Virtex). or "You've got to use ALTERA" or "You've got to have marketing experience". I can supply the names of the guilty parties, but I don't think that would help. Respectfully yours, Mark Taylor Mark_Taylor_Chess@compuserve.com 101551.3434@compuserve.comArticle: 34468
it looks like what you wrote was an asynchronous shift register, and i am still not sure if it is shifting. anyways, since you are practicing your verilog and/or learning the language, then i suggest it is probably easier to start off with a synchronous shift register. in verilog, it would usually look something like this:

module SerIn_ParallelOut_ShiftReg(
   CLK,
   DIN,
   DOUT
);

   input CLK;
   input DIN;
   output [3:0] DOUT;

   reg [3:0] DOUT;
   reg [3:0] REG;

   always @(posedge CLK)
   begin
      REG <= {REG[2:0], DIN};
      DOUT <= REG;
   end

endmodule

in the module, the bottom 3 bits of REG get moved up one bit place and the last spot (bit 0) gets filled with DIN. anyways, this is just an example and i recommend that whatever you do, try and stay synchronous as much as possible. it looks like you are playing asynchronous games, and also, you are playing games with the clock which is always pretty scary to do.

skitz
Article: 34469
I'm looking for available PCI-X systems to be used for testing a prototype adapter card. I know about the Compaq Proliant servers, but they are quite expensive. ServerWorks makes a chipset called "Grand Champion" (if memory serves me correctly). Are there any motherboards based on the Grand Champion available yet?

TIA
Petter
--
________________________________________________________________________
Petter Gustad 8'h2B | (~8'h2B) - Hamlet in Verilog http://gustad.com
Article: 34470
Hello all,

I am going to program a Xilinx FPGA for the first time. I wanna set up the synthesis tools on a PC; what are the system requirements for proper (and relatively fast) operation?

Thanks
Article: 34471
Good Morning,

I've used Matlab and its function RCOSINE to design an SRRC filter interpolating by 6 with a roll-off of 0.35. It automatically chose to implement it as a 37 tap filter, so in the implementation I arranged 6 FIRs, the first with 7 taps and the other five with 6 taps, with the coefficients distributed in the classical polyphase way: the first on the first filter, the second on the second filter, and so on. The problem is that this doesn't work, because to maintain the VHDL synchronism between the values produced by the FIRs they had to have the same length. So I rearranged to have all filters with 7 taps, with the last 5 FIRs having a final coefficient value of 0, and in this way it works perfectly. So my question is: why does Matlab produce a 37 value rcosine filter, and how can I rearrange to impose a 42 tap filter?

thanks ... Antonio
Article: 34472
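For what it's worth, the odd length most likely comes from RCOSINE's default group delay of 3 symbols. Assuming the old Communications Toolbox convention (a hedged reading of its defaults, not confirmed in the post), the returned filter length is

```latex
N_{\mathrm{taps}} = 2 \cdot \mathrm{delay} \cdot \frac{F_s}{F_d} + 1
                  = 2 \cdot 3 \cdot 6 + 1 = 37
```

which is always odd and therefore never a multiple of the interpolation factor 6. Zero-padding up to 42 = 7 x 6 taps, exactly as described above, is the standard way to give the polyphase branches equal length; the extra zero coefficients don't change the response.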
> > > > > Yet the majority of designs are done in HDL for
> > > > > good reasons: less effort, better results.
> > > >
> > > > You have no proof of this.
> > >
> > > That's correct, for I don't think such things can be proven. Do you?
> >
> > Then why did you make a claim that you know isn't true?
>
> As Godel showed, propositions can be both true and unprovable.

Well, then I claim that schematics are less effort and give better results.

> > I've done a number of designs in both, and schematics win hands down to
> > finished product every time, except for the simplest/slowest designs.
>
> Back when synthesis got started, designers were drawing transistors on
> schematics and wanted a faster way to an acceptable answer. Sure, by
> calculations and experience the human designer can produce a nearly optimal
> design for a bit of logic (say for the carry bit of an ALU, perhaps some 20 or
> 30 transistors, using many hours of human designer time: but a simple synthesis
> tool can do nearly as well in seconds. By freeing the designer from the tiny
> details of transistors and gates, much more complex designs could be completed.

I don't get your point. There is no need to draw gates with schematics, any more than you do with HDLs.

> > There can be way too much
> > fussing with timing and tools etc. with synthesis. When synthesis works, it
> > works, and that's nice...but there is no guarantee that it will, and if it
> > doesn't, the amount of time it takes to get it to work can be exponential,
> > or even complete failure.
>
> When I write code I know fairly closely what the synthesis tool is going to
> produce,

You DO? How do you know that? Do you look at the gates that the synthesis tool generates? I'm skeptical...

> and how fast it will run. This takes experience,

More than experience. It takes only using one revision of one tool! The synthesized code will give different results based on a LOT of variables, even point revisions of the tools. You can't guarantee that.
Unless there is a document that synthesis tool vendors provide that shows EXACTLY what code is generated for each construct, what you suggest just isn't going to work.

> but so does "proper
> use of schematics", correct?

Er, no it doesn't. Same schematic, same results.

> And yes, I'm sure that an inexperienced HDL
> designer can waste a lot of time fussing with timing and tools and such.

Sorry, but one does not have to be inexperienced in HDL to waste a lot of time.

> I know that an inexperienced schematic designer can waste a lot of time
> fussing with timing and tools and such. I did that a decade ago.

Then perhaps learning the right way to use those tools would have been to your benefit!
Article: 34473
Hi. I am not entirely new to PLDs, but I have never worked with any kind of FPGA, and I wonder: I want to use something in the range of the XC2S200 in my design. I have seen experiment boards equipped with this chip that are programmed by a Xilinx programmer cable, but these FPGAs don't store the bitfile internally AFAIK, so some EEPROM has to be added? How is this done? How does programming then work (and erase)?