PREP was an unmitigated disaster. It started with the naive idea that the capacity and speed of any and all programmable devices, from simple CPLDs to big XC3090s, could be measured by implementing a design consisting of concatenated MSI blocks. Then we underestimated the power of marketing, and Altera's "creativity" ( I have a long list of dirtier words for it), who claimed virtual designs with more than 100% utilization of the existing circuitry. Made me lose all respect for them.

Different from microprocessors, programmable logic comes in widely different sizes and flavors, and is being used for widely differing purposes. I stick with my original story that a single-number measurement for capacity ( or speed ) is as meaningless as 38-24-36.

That's why the industry still needs smart engineers who can evaluate and select the right device for the right job. And app notes and newsgroups to explain the important subtleties.

Peter Alfke
======================================
Allan Herriman wrote:
>
>
> About a decade ago there was a benchmark called "Prep" but it isn't used
> anymore. I'm not quite sure why, but I think it had something to do
> with Prep not being a realistic measure of the performance of a part.
> IIRC, one of the measures was the number of 16 bit counters that would
> fit. Obviously, this mapped quite well to one particular vendor's part
> that has groupings of 16 flip flops.
>
> I propose a new benchmark: the number of traffic light or vending
> machine controllers that will fit in the FPGA. :)
>
> Regards,
> Allan.

Article: 31251
vi wrote:
>
> Can anyone supply me or direct me to a source that has a 24-bit
> comparator (GT or LT output) in schematic format for FPGA? I use Xilinx.

So use the core generator to get your comparator.

--
MFG Falk
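For reference, a comparator this small can also be written directly in behavioral VHDL instead of being generated as a core. Below is a minimal sketch with illustrative entity and signal names (the original request was for schematic format, so treat this only as an alternative):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity comp24 is
  port (
    a  : in  std_logic_vector(23 downto 0);
    b  : in  std_logic_vector(23 downto 0);
    gt : out std_logic;   -- '1' when a > b (unsigned compare)
    lt : out std_logic    -- '1' when a < b (unsigned compare)
  );
end comp24;

architecture rtl of comp24 is
begin
  gt <= '1' when unsigned(a) > unsigned(b) else '0';
  lt <= '1' when unsigned(a) < unsigned(b) else '0';
end rtl;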
Article: 31252

Harvey Twyman wrote:

> <snip>
>
> The one thing I've learnt from this exercise is that
> if you post to newsgroups you must expect to be
> crucified if you 'do' or 'say' anything slightly
> wrong.
>
> I feel that it's unlikely that good comments are ever
> made through this type of media.

I disagree.
I find this a very helpful, cooperative, and even friendly newsgroup. It treats sophisticated questions with respect, and newbies even with love. We may, however, sometimes react negatively to pomposity, and "know-it-almost-all" statements. And some of us, present company included, have strong opinions on certain subjects.

But we do make "good comments".

Peter Alfke

Article: 31253
I did it by generating the VHDL file from coregen. I compiled it on the command line. In renoir, I created a standard library and pointed the directory to the one that I compiled. I also had to do the same with the xul library.

The new HDL Designer has some hooks in it to make importing from Coregen easier. I haven't tried it yet. I usually end up using the Moduleware stuff that comes with it.

Jean-Pierre Gehrig (redbull@netplus.ch) wrote:
: I'm trying to use a dual port RAM generated by the Xilinx Core Generator for
: a Spartan-II.
: Does anybody knows how to create a component from the VHO file with Renoir
: (I use Leonardo for synthesis)?
: Thanks,
: Jean-Pierre Gehrig
: Design Engineer
: HEVs Switzerland

--
Vaughn
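For reference, once the generated VHDL has been compiled into a library, the core is wired into user code as a black box. A minimal sketch of that pattern follows; the wrapper, component name and port list here are placeholders, and the real ones should be copied from the instantiation template (the .vho file) that the Core Generator writes next to the netlist:

library ieee;
use ieee.std_logic_1164.all;

entity ram_wrapper is
  port (
    clk_a  : in  std_logic;
    clk_b  : in  std_logic;
    we_a   : in  std_logic;
    addr_a : in  std_logic_vector(7 downto 0);
    din_a  : in  std_logic_vector(15 downto 0);
    addr_b : in  std_logic_vector(7 downto 0);
    dout_b : out std_logic_vector(15 downto 0)
  );
end ram_wrapper;

architecture structural of ram_wrapper is

  -- Black-box declaration for the generated core.  The name and ports are
  -- hypothetical; they must match the CoreGen .vho template exactly.
  component my_dpram
    port (
      clka  : in  std_logic;
      wea   : in  std_logic;
      addra : in  std_logic_vector(7 downto 0);
      dina  : in  std_logic_vector(15 downto 0);
      clkb  : in  std_logic;
      addrb : in  std_logic_vector(7 downto 0);
      doutb : out std_logic_vector(15 downto 0)
    );
  end component;

begin

  -- For synthesis the core is typically left as a black box; its CoreGen
  -- netlist is merged in later during the Xilinx implementation flow.
  u_ram : my_dpram
    port map (
      clka  => clk_a,
      wea   => we_a,
      addra => addr_a,
      dina  => din_a,
      clkb  => clk_b,
      addrb => addr_b,
      doutb => dout_b
    );

end structural;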
Article: 31254

Iwo Mergler wrote:
>
> IIRC, the tracks on your card should be routed above a continuous
> power plane and must be shorter than 1 inch. The PCI clock signal
> must be 1.5 inches long. All signals drive exactly one CMOS input.
> I might be completely wrong.

I think so. I don't have the PCI spec (and never read it :-( but I am sure that this won't work. It is called PCI-BUS, so one signal can't drive only 1 input. Also, the distance between the PCI slots in my PC is greater than 1 inch.

--
MFG Falk

Article: 31255
Andrew Webb wrote:
>
> Dear All,
>
> I have been asked by the FPGA developers at my company to recommend a
> suitable platform and configuration to minimise the time they have to spend
> waiting for XILINX compilations. They currently have to wait around 8 hours.
> We tried a P4 with RAMBUS memory, but the performance increase over a P3
> wasn't that great. Any ideas ... my boss gave me this brief.

UHHHH, sounds like REALLY BIG designs (and FPGAs). What type is it? Virtex-E 3200??

Anyway, I think a possible solution is to split the design into small parts, test and compile them, and then put the modules together WITHOUT running a full Place&Route on the design at all. Dammit, I don't know at the moment what this is called in English. Peter, help ;-)

--
MFG Falk

Article: 31256
In article <3B019429.3FEE8592@cyberspook.freeserve.co.uk>, pjc@cyberspook.freeserve.co.uk (cyber_spook) wrote: > And lastly - is there somwhere I can find this information? The PCI spec will tell you what the rules are. If you break them, you will get away with it sometimes. The precise proportion of the time you will get away with it varies depending on which rules you break, how badly you break them, how cleverly you do it, the rest of the system and the phase of the moon. Your mother's maiden name may also be a factor. With experience, you can make a reasonable guess whether some bodge is likely to work in a particular instance, but it's not readily quantifiable. -- Steve Rencontre http://www.rsn-tech.co.uk //#include <disclaimer.h>Article: 31257
In article <ddn3gt074c21a28nbsg67aoisgj2heoq9d@4ax.com>, mhkohne@discordia.org (Michael Kohne) wrote:

> I've got a stupid problem with a counter. The code is written in
> Altera's AHDL language,
> [...description of glitches...]
> IF (XMTN_R63) THEN -- XMTN_R63 is a registered copy of an input
> FADCNT[18..0].d = 0;
> ELSE
> FADCNT[18..0].d = FADCNT[18..0].q + 1;
> END IF;

First thought - scrap that code and use LPM instead.

Second thought - redesign to be fully synchronous and therefore tolerant of glitches.

If you /really/ can't tolerate any glitches on your counter output, you have no alternative but to use a Gray code.

--
Steve Rencontre    http://www.rsn-tech.co.uk
//#include <disclaimer.h>
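For reference, here is a minimal VHDL sketch of the Gray-code approach, assuming a 19-bit counter with a registered synchronous clear like the one in the post (names and widths are illustrative). The count is kept in binary internally and only a Gray-encoded register is exposed, so at most one output bit changes per clock edge and anything sampling or decoding the value never sees a transient mixture of old and new bits:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity gray_counter is
  generic ( WIDTH : positive := 19 );
  port (
    clk    : in  std_logic;
    clr    : in  std_logic;                           -- synchronous clear
    gray_q : out std_logic_vector(WIDTH-1 downto 0)
  );
end gray_counter;

architecture rtl of gray_counter is
  signal bin : unsigned(WIDTH-1 downto 0) := (others => '0');  -- simulation start value
begin
  process (clk)
    variable nxt : unsigned(WIDTH-1 downto 0);
  begin
    if rising_edge(clk) then
      if clr = '1' then
        bin    <= (others => '0');
        gray_q <= (others => '0');
      else
        nxt := bin + 1;
        bin <= nxt;
        -- Gray encoding: g = b xor (b >> 1); adjacent counts differ in one bit
        gray_q <= std_logic_vector(nxt xor shift_right(nxt, 1));
      end if;
    end if;
  end process;
end rtl;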
Article: 31258

On Wed, 16 May 2001 13:54:04 +0100, Rick Filipkiewicz <rick@algor.co.uk> wrote:

>
>
>Utku Ozcan wrote:
>>
>> Rick Filipkiewicz wrote:
>>
>> > For some time now I've noticed that every now & again PAR would hang up
>> > in the ``route PWR/GND'' phase of iteration 2. Or, at least, progress
>> > would slow to a crawl. Since I've only got 722 such connections none of
>> > the answers on the web seemed at all relevant.
>>
>> Rick,
>>
>> If I remember correctly, there was a discussion on PWR/GND distribution
>> at this newsgroup long ago (I don't know if you can find these postings
>> in http://groups.google.com (formerly http://www.deja.com) since they lost
>> old mails or whatever).
>>
>> Can it be the problem how PWR/GND is modelled in HDL? Maybe 722 PWR/GND
>> connections is used from only one Verilog assignment. Or, are these
>> potentially unroutable connections centralized within some small region
>> (module) of the chip? Maybe warning/errors of unrouted PWR/GND might
>> inform on it.
>>
>> Utku
>
>But, Utku, why only on a Sunday ?

Are you using a networked version of the software. On Sundays, the IP folks may be using the network for backing up drives and such.

Ralph Watson
Return Email Address is:
ralphwat dot home at excite dot com
just type the address in like it should look like

Article: 31259
andrew, the first thing to check is your memory usage. i see you only mentioned available memory on the HP Vectra. and that was only 0.5 GB. increasing this could be an easy way to reduce compile times on all machines. in my experience, the 2 things that affect compile times the most are memory and CPU speed. brant Andrew Webb wrote: > Dear All, > > I have been asked by the FPGA developers at my company to recommend a > suitable platform and configuration to minimise the time they have to spend > waiting for XILINX compilations. They currently have to wait around 8 hours. > We tried a P4 with RAMBUS memory, but the performance increase over a P3 > wasn't that great. Any ideas ... my boss gave me this brief. > > Thanks, > > Andrew Webb > Thales Defence > > ...... > Our Software > Xilinx Foundation 3.1I with SP7 > > Devices > Devices are xcv1000, xcv1600e-6 > > The Problem > Our iteration cycles can take up to 8 hours (includes our final timing > constraints). We currently use Intel PIII 800MHz to 1GHz PCs. > > Our Initial Solution > Part 1 > We purchased a HP Vectra 800VL which is an Intel 1.5GHz P4, 0.5GB RAM > (400MHz Front Side Bus), SCSI 160 Controller. The initial installation was a > problem but we found the fix on your Web Site (java error) and then found > that our expectation of much reduced compilation times was not met. Design 1 > (See Mapping Report File) took 3 hours 18min on a PIII and 2 hours 40min on > a P4. We were hoping for much more as we were assuming the 400MHz FSB would > improve CPU to Memory operations during compilation. > We now understand that you are releasing a new version of Xilinx that will > be optimised for P4 architectures in August. > > Part 2 > We have since been steered towards the AMD processors, which we are testing > today. > > Our Aim > We need to reduce this iteration time to allow us to make more changes to > our designs within one day. Currently we are relaxing the timing constraints > during the initial stages, but having to then place our final constraints > within the design to meet the overall system requirements. This is where the > compile time increases to 8 hours. > > Our Questions > Can you advise on the compilation platforms to yield the best performance, > short and long term. > Have you any hint or tips on Operating System optimisation or choice of O.S. > I want to take a more strategic view of this issue and formulate plan for > when we begin using even bigger gate Arrays.Article: 31260
> It should reay have a short bus with minimal stubs to each card / device
> as you see in a standard pc, with good grouding and tracks layed
> correctly. corret?

Absolutely. As has already been mentioned: having a look at the PCI spec might give you some clues. :-) Still, since I am not a PCB designer, it is quite hard to understand how all the requirements mentioned in there interact.

Anyway, here are some restrictions from the spec, for the 66 MHz case:

- maximum clock skew between any 2 components (irrespective of whether this is on the motherboard or on an expansion board) is 1 ns (2 ns for 33 MHz). This will require very balanced routing of the clock signal (or the use of a PLL to align the edges?)

- Tprop, the maximum roundtrip delay on the bus between any 2 components, should be < 5 ns (10 ns for 33 MHz). This is probably even a bigger constraint than the one above.

- total load should be limited, thus putting limits on the number of components on a single bus.

> How long can the bus be at 66Mhz?

Not specified. Timing and load are important. The corresponding length will probably vary with the quality of the material used to make the PCB, the width of the wires, capacitance etc.

> How much veriation in track length can I have? (1 inch, 5 inch, 10inch
> difference?) How long can a stub be? (1 inch, 5 inch?)

Same thing. The only limitations for tracks are those for the lengths from the PCI connector to the IO of a component on the PCI extension board: 1.5 inch for all signals except for the clock, where it should be 2.5 inch.

> What problems am I ligthy to see? bit errors, parity etc?

Yes. And dead systems also. :-)

> also... If I have one device that is 32bit and three that are 64bit -
> should the 32bit device sit in the middle or can it sit off one end (ie
> extending the lower part of the bus beond the top 32bits)?

My feeling is that this should be OK. As long as you meet the timing specs.

> And lastly - is there somwhere I can find this information?

It's all in the PCI spec that you can buy at www.pcisig.org. Chapters 5 and 7.

Tom

Article: 31261
On Wed, 16 May 2001 18:33:03 +0200, Falk Brunner <Falk.Brunner@gmx.de> wrote:

>Iwo Mergler wrote:
>>
>> IIRC, the tracks on your card should be routed above a continuous
>> power plane and must be shorter than 1 inch. The PCI clock signal
>> must be 1.5 inches long. All signals drive exactly one CMOS input.
>> I might be completely wrong.
>
>I think so. I don't have the PCI spec (and never read it :-( but I am
>sure that this won't work. It is called PCI-BUS, so one signal can't drive
>only 1 input. Also, the distance between the PCI slots in my PC is
>greater than 1 inch.

I think he meant the on-card stub.

Anyway, I happen to be debugging a PCI board right now (hangs the system on boot - hate that!) and have my copy of _PCI_System_Architecture_ (MindShare) open. According to Shanley and Anderson:

Maximum Card Trace Lengths:
- All signals on the 32bit portion of the bus must be no longer than 1.5".
- All 64-bit extension signals must be no more than 2".
- PCI CLK signal trace length must be 2.5" +/- .1" (hmm) and connected to only one load.

----
Keith

Article: 31262
Tom <tomcip@concentric.net> writes:

> Ask some questions of your customer. What is so important about
> using the Actel Anti-fuse technology? What is so important about
> using ANY technology.

Paranoia.

> The objective of design engineering is to design that which the
> customer requires within their cost and functionality
> constraints. What difference does it make which technology is used
> if the technology works and is affordable? Does the customer have a
> prejudice?

Most certainly yes.

> Did the customer read some kind of marketing blurb from Actel? Do
> they own stock in Actel? Never believe the marketing hype.

I think it's a design security thing. $DEITY knows why because there won't be many units and they'll all be in access controlled areas.

I've been playing with the free Actel Desktop software the last couple of days and in some respects it's a big step backwards from the Xilinx stuff, especially when it comes down to revision control. I hope the higher end software is somewhat better than this.

Chris
--
Chris Eilbeck mailto:chris@yordas.demon.co.uk
MARS Flight Crew http://www.mars.org.uk/
UKRA #1108 Level 1 BSMR

Article: 31263
rk <stellare@nospamplease.erols.com> writes: **snip** Thanks all. Chris -- Chris Eilbeck mailto:chris@yordas.demon.co.uk MARS Flight Crew http://www.mars.org.uk/ UKRA #1108 Level 1 BSMRArticle: 31264
gah@ugcs.caltech.edu (glen herrmannsfeldt) writes:

> Neil Franklin <neil@franklin.ch.remove> writes:
>
> >Eric Smith <eric-no-spam-for-me@brouhaha.com> writes:
>
> >> Bzzzt! No longer commercially available.
>
> >Not any more? Bummer.
>
> >All the more reason for me to get on with my clone then. :-)
>
> There is now a software emulator that runs under most unix
> systems. The remaining bugs are rapidly being worked out.

Actually multiple of them: TS-10, E-10 and simh 2.6. But emulators are not quite the same thing as real hardware. :-)

> I forget if they have TOPS-20 running yet. Maybe only TOPS-10.

AFAIK they are all on TOPS-10 7.03, even 7.04 is crashing all of them.

> See the pdp10 newsgroup.

alt.sys.pdp10, which I read.

--
Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Sysadmin, Archer, Roleplayer
- Intellectual Property is Intellectual Robbery

Article: 31265
Peter Alfke wrote: > > PREP was an unmitigated disaster. It started with the naive idea that the > capacity and speed of any and all programmable devices, from simple CPLDs to big > XC3090s, could be measured by implementing a design consisting of concatenated > MSI blocks. Then we underestimated the power of marketing, and Altera's > "creativity" ( I have a long list of dirtier words for it), who claimed virtual > designs with more than 100% utilization of the existing circuitry. Made me lose > all respect for them. > > Different from microprocessors, programmable logic comes in widely different > sizes and flavors, and is being used for widely differing purposes. I stick with > my original story that a single-number measurement for capacity ( or speed ) is > as meaningless as 38-24-36. > That's why the industry still needs smart engineers who can evaluate and select > the right device for the right job. And app notes and newsgroups to explain the > important subtleties. > > And someone like you to act as our collective memory & keep us honest.Article: 31266
Newsbrowser wrote: > > On Wed, 16 May 2001 13:54:04 +0100, Rick Filipkiewicz > <rick@algor.co.uk> wrote: > > > > > > >Utku Ozcan wrote: > >> > >> Rick Filipkiewicz wrote: > >> > >> > For some time now I've noticed that every now & again PAR would hang up > >> > in the ``route PWR/GND'' phase of iteration 2. Or, at least, progress > >> > would slow to a crawl. Since I've only got 722 such connections none of > >> > the answers on the web seemed at all relevant. > >> > >> Rick, > >> > >> If I remember correctly, there was a discussion on PWR/GND distribution > >> at this newsgroup long ago (I don't know if you can find these postings > >> in http://groups.google.com (formerly http://www.deja.com) since they lost > >> old mails or whatever). > >> > >> Can it be the problem how PWR/GND is modelled in HDL? Maybe 722 PWR/GND > >> connections is used from only one Verilog assignment. Or, are these > >> potentially unroutable connections centralized within some small region > >> (module) of the chip? Maybe warning/errors of unrouted PWR/GND might > >> inform on it. > >> > >> Utku > > > >But, Utku, why only on a Sunday ? > > Are you using a networked version of the software. On Sundays, the IP > folks may be using the network for backing up drives and such. > > Ralph Watson > Return Email Address is: > ralphwat dot home at excite dot com > just type the address in like it should look like It could be but its unlikely since the Athlon wonderbox I use for this is my home machine & only really functions as a compute server under control of makefiles on my ca. 1996 PI-150 running BSDI Unix. I don't do any local backups but periodically rsync to the main office servers. I've run task manager to look at what PAR's doing during these strange events & I see it using 99% of the CPU time. This is not in itself decisive since who knows how NT calculates this & there maybe something in the BG that's using all the memory bandwidth. However if it were to be some other process then I would expect, on probabilistic grounds, to get hang-ups in different places & not the PWR/GND phase every time.Article: 31267
Peter Alfke wrote: > > Harvey Twyman wrote: > > > <snip> > > > > The one thing I've learnt from this exercise is that > > if you post to newsgroups you must expect to be > > crucified if you 'do' or 'say' anything slightly > > wrong. > > > > I feel that it's unlikely that good comments are ever > > made through this type of media. > > I disagree. > I find this a very helpful, cooperative, and even friendly newsgroup. > It treats sophisticated questions with respect, and newbies even with love. > We may, however, sometimes react negatively to pomposity, and > "know-it-almost-all" statements. > And some of us, present company included, have strong opinions on certain > subjects. > > But we do make "good comments". > > Peter Alfke Harvey, If you want to see how even the grown ups can get a bit of GBH-with-love have a look at all the Spartan2 threads of some time ago. Xilinx, Peter A. included, got a pretty good hammering over that but are still with us. As to ``good comments'' - I've lost count of the number times I've been saved hours of work by some comment on this NG. I think there *are* are number of things that will be treated with a certain amount of disdain & maybe outright abuse: o Job adverts from recruitment agencies. o Anything that smacks of marketing voodoo. o Someone trying to get their homework assignments done. and possibly: o Those who don't understand Shannon.Article: 31268
http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=10632

Austin

Andrew Webb wrote:

> Dear All,
>
> I have been asked by the FPGA developers at my company to recommend a
> suitable platform and configuration to minimise the time they have to spend
> waiting for XILINX compilations. They currently have to wait around 8 hours.
> We tried a P4 with RAMBUS memory, but the performance increase over a P3
> wasn't that great. Any ideas ... my boss gave me this brief.
>
> Thanks,
>
> Andrew Webb
> Thales Defence
>
> ......
> Our Software
> Xilinx Foundation 3.1I with SP7
>
> Devices
> Devices are xcv1000, xcv1600e-6
>
> The Problem
> Our iteration cycles can take up to 8 hours (includes our final timing
> constraints). We currently use Intel PIII 800MHz to 1GHz PCs.
>
> Our Initial Solution
> Part 1
> We purchased a HP Vectra 800VL which is an Intel 1.5GHz P4, 0.5GB RAM
> (400MHz Front Side Bus), SCSI 160 Controller. The initial installation was a
> problem but we found the fix on your Web Site (java error) and then found
> that our expectation of much reduced compilation times was not met. Design 1
> (See Mapping Report File) took 3 hours 18min on a PIII and 2 hours 40min on
> a P4. We were hoping for much more as we were assuming the 400MHz FSB would
> improve CPU to Memory operations during compilation.
> We now understand that you are releasing a new version of Xilinx that will
> be optimised for P4 architectures in August.
>
> Part 2
> We have since been steered towards the AMD processors, which we are testing
> today.
>
> Our Aim
> We need to reduce this iteration time to allow us to make more changes to
> our designs within one day. Currently we are relaxing the timing constraints
> during the initial stages, but having to then place our final constraints
> within the design to meet the overall system requirements. This is where the
> compile time increases to 8 hours.
>
> Our Questions
> Can you advise on the compilation platforms to yield the best performance,
> short and long term.
> Have you any hint or tips on Operating System optimisation or choice of O.S.
> I want to take a more strategic view of this issue and formulate plan for
> when we begin using even bigger gate Arrays.

Article: 31269
Andrew Webb wrote: > > Dear All, > > I have been asked by the FPGA developers at my company to recommend a > suitable platform and configuration to minimise the time they have to spend > waiting for XILINX compilations. They currently have to wait around 8 hours. > We tried a P4 with RAMBUS memory, but the performance increase over a P3 > wasn't that great. Any ideas ... my boss gave me this brief. > > Thanks, > > Andrew Webb > Thales Defence > > ...... > Our Software > Xilinx Foundation 3.1I with SP7 > > Devices > Devices are xcv1000, xcv1600e-6 > > <snip> > Part 2 > We have since been steered towards the AMD processors, which we are testing > today. > I can give you one data point here. Changing from a PIII-600-PC100 to an Athlon-1G3-DDR266 has speeded up PAR by 52% and post-synth/post-PAR simulations by ~45% [ModelSim-PE]. The designs aren't as big as yours but that's more a question of memory size. If you are using WinNT/Win2K then the main rule is: Never, ever, ever, let it start swapping. If you do the performance plummets to i286 levels - if you're lucky. > Our AimArticle: 31270
"cyber_spook" <pjc@cyberspook.freeserve.co.uk> wrote in message news:3B019429.3FEE8592@cyberspook.freeserve.co.uk... > I have a question regarding the bit of PCI outside the chip - the bus!? > > I know that running at 33/66 Mhz is entering the RF range of problems - > but how far can I push it, and it still work? > > It should reay have a short bus with minimal stubs to each card / device > as you see in a standard pc, with good grouding and tracks layed > correctly. corret? > > But..... > > and here are my questions:- > > How long can the bus be at 66Mhz? > How much veriation in track length can I have? (1 inch, 5 inch, 10inch > difference?) > How long can a stub be? (1 inch, 5 inch?) > How close can I lay tracks to each other or other data lines without > cross talk? > What problems am I ligthy to see? bit errors, parity etc? > > also... If I have one device that is 32bit and three that are 64bit - > should the 32bit device sit in the middle or can it sit off one end (ie > extending the lower part of the bus beond the top 32bits)? > > And lastly - is there somwhere I can find this information? > > Many - Many thanks There are three scenarios here, and from what you wrote, I can't tell which one you are trying to figure out. If you elaborate, I might be able to help. Also, as suggested, get the PCI spec, at a minimum. 1) PCI plug in card? 2) PCI motherboard? 3) Embedded PCI (no connectors)? Which one are you trying to understand?Article: 31271
Hi to all...

Thanks for the answers...! You all quote the PCI spec - what's the latest version? 2.2?

PS: this is for an embedded system with plug-in cards.

many thanks
Cyber_Spook_Man!

> It's all in the PCI spec that you can buy at www.pcisig.org. Chapters 5 and
> 7.
>
> Tom

Article: 31272
Hello Xilinx Software users,

Service Pack 8 is now available for all Platforms. To keep your software up to date with the latest features I encourage you to visit our service pack download center at support.xilinx.com.

Thank you for designing with Xilinx!

Roy White
Product Applications Engineer, Xilinx, Inc.

Article: 31273
Now we all know, but I told Roy already to tone down the commercials. Let's keep this ng friendly and informative, without blatant propaganda. Peter AlfkeArticle: 31274
Latest version is indeed 2.2. But check on the PCI website to see if there are updates expected. (Not that it will really matter for your system...)

Tom

> You all quote the PCI spec - what's the latest version? 2.2?
>
> PS: this is for an embedded system with plug-in cards.