In article <3D15741C.D46FC639@yahoo.com>, rickman <spamgoeshere4@yahoo.com> wrote: >That may be true in some context, but the benchmarks I have read >indicate that the Athlons in standard desktops run quite well beating >P3s at 20% higher clock speeds. The advantage over P4s is even more. >But laptops are where they seem to be filling the low end and the chip >sets are exclusively designed for that. Also, remember that the P4 started out with some REALLY REALLY crappy chipsets, either RDRAM (horribly expensive) or poor and therefore slow SDRAM implementations. Which did affect things. I hope Hammer comes out at speed, that will be a great foundation for CAD machines, especially since the memory-controller on die should help memory latency (massive L1 and L2, 30 cycles plus access time to get to main memory). >Of course laptops will not match desktops. But a new laptop is much >better than the desktop I am currently using. I am just trying to get >information on which processor is currently best for FPGA work in >laptops. I can do quite well without a desktop for the time being. You might want to reconsider, simply because the desktops are SO much cheaper. You can build yourself a really nice CAD machine for the desktop for a low cost, esp if you reuse your monitor. As for laptops, my GUESS would be a PIII laptop would be better than a P4, because the PIII has a better L1 cache, runs faster on a clock-per-clock basis (so better on a power basis). I'd THINK that simulated annealing would be memory and integer bound (but you could write it so that it is memory and FP bound, depending on how you do the cost function). Anyone at Brand X or Brand A care to comment? >Can you cite a source for this info? I have never heard that P4s won't >run at full speed for long periods. This may be true in laptops, I >don't doubt, but I can't belive they would design a desktop that won't >crunch at top clock all day. The P4 has a clock throttling thermal diode. 
In case of overheat, it cuts the clock in half. Early systems had poorly implemented cooling, so you could trigger it during normal use on the early desktops. -- Nicholas C. Weaver nweaver@cs.berkeley.eduArticle: 44551
Marcel wrote: > Hi, > > I'm using Xilinx webpack and am quite new to VHDL, at the marked line I have > to place three times end if, > otherwise a syntax error is generated. > > Why do I have to place 3 times end if here instead of 1 end if ? > > Any ideas ? > > > > process( SEL, D ) > begin > if ( SEL = "0000" ) then > TCK <= D(0); > else if ( SEL = "0001" ) then "elsif" instead of "else if". -- My real email is akamail.com@dclark (or something like that).Article: 44552
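The reason: "else if" (two words) opens a new, nested if statement, each of which needs its own "end if", while "elsif" continues the same statement. A minimal corrected sketch of Marcel's process (the D(1) branch and the final else are assumptions, since the original post was cut off at the marked line):

```vhdl
-- Sketch of the corrected selector process; branches beyond SEL = "0001"
-- are assumed, since the original post was truncated.
process ( SEL, D )
begin
    if ( SEL = "0000" ) then
        TCK <= D(0);
    elsif ( SEL = "0001" ) then   -- "elsif", not "else if"
        TCK <= D(1);
    else
        TCK <= '0';               -- assumed default, avoids an inferred latch
    end if;                       -- one "end if" now closes the whole chain
end process;
```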
> a) using FPGA Editor (or similar tool) add probe points to the > design and track the bad logic signals back to the source. > If the failure is in routing (most likely), then you will > probably observe that one part of the route has the correct > signal, another part of the route does not. If you suspect > a net, try rerouting only that net. If that fixes the issue, > the problem is on that net. I read this yesterday and using the FPGA editor to probe a signal or two in an FPGA seems like a really good idea to me. I have to admit that I have never used the FPGA editor before, so my question is based on lack of knowledge mostly. This morning, I took my current design, got it into FPGA editor, selected a net and clicked "probes". At this point, I can see a series of messages on the bottom of my screen (I'm using ISE 4.2i), saying probe routed to one pad after another. It looks like a probe was either routed to half a dozen different pads one at a time trying to find the fastest route out of the chip, or it routed the same signal to half a dozen different pads. I don't know yet. Basically, my question is, "How can I route a probe to a particular pad (one of 8 or so I have set aside for test points that go to a Mictor connector in my design)?". Being a little new at FPGA editor, a verbose, step by step answer would be greatly appreciated. Charles KrinkeArticle: 44553
Nicholas Weaver wrote: > > In article <3D15741C.D46FC639@yahoo.com>, > rickman <spamgoeshere4@yahoo.com> wrote: > >That may be true in some context, but the benchmarks I have read > >indicate that the Athlons in standard desktops run quite well beating > >P3s at 20% higher clock speeds. The advantage over P4s is even more. > >But laptops are where they seem to be filling the low end and the chip > >sets are exclusively designed for that. > > Also, remember that the P4 started out with some REALLY REALLY crappy > chipsets, either RDRAM (horribly expensive) or poor and therefore slow > SDRAM implementations. Which did effect things. What did it affect? I am not following you here. > >Of course laptops will not match desktops. But a new laptop is much > >better than the desktop I am currently using. I am just trying to get > >information on which processor is currently best for FPGA work in > >laptops. I can do quite well without a desktop for the time being. > > You might want to reconsider, simply because the desktops are SO much > cheaper. You can build yourself a really nice CAD machine for the > desktop for a low cost, esp if you reuse your monitor. I think I know my needs better than most other people know my needs. A desktop is nice, but I get really tired of packing it up when I have to drive to my other work locations. Whew, BIG box. I can't even get the 19" monitor box in my seat, it has to go in the back of the pickup! > As for laptops, my GUESS would be a PIII laptop would be better than a > P4, because the PIII has a better L1 cache, runs faster on a > clock-per-clock basis (so better on a power basis). This is one of those situations where having real data far surpasses armchair analysis. All of the laptop tests I have seen show available P4s outrunning available P3s and lasting just as long if not longer on batteries. They may be using larger batteries or they may have better power control. Don't know, don't care. > >Can you cite a source for this info? 
I have never heard that P4s won't > >run at full speed for long periods. This may be true in laptops, I > >don't doubt, but I can't belive they would design a desktop that won't > >crunch at top clock all day. > > The P4 has a clock throttling thermal diode. In case of overheat, it > cuts the clock in half. Early systems had poorly implemented cooling, > so you could trigger it during normal use on the early desktops. But you don't have any data or reports showing that this is a common problem? Do you know any brand names or models? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 44554
Look up the syn_reference_clock attribute in the manual. This can be used to do exactly what you want. It will be something like define_clock clk50 -period 20.0 -virtual define_attribute {find -reg -enable en50} syn_reference_clock clk50 If you could back your counter up one state and add a pipeline stage for the enable, that would help a lot in making timing on the CE pins. Ghys wrote: > Hi, > > I have a design that runs at 50 MHz, with a system clock of 250 MHz, and a > clock enable of 50 MHz, on a Xilinx Virtex 2 xc6000. > > The clock enable is generated internally with a simple counter at 250 MHz > that generates a pulse every 5 clock cycle. This pulse is distributed to 150 > flip flop's CE input. > > How do I tell Synplify 7.1 Pro this situation : > > 1 - The major part of the design logic runs only at 50 MHz, even though the > system clock is at 250 MHz. > 2 - The pulse generated from the counter only has a system clock period (4 > ns) to reach every flip flop CE input. > > knowing that Synplify will replicate this net (and therefore renaming the > nets) to reduce fanout. > > Thanks, > > Ghyslain Gagnon > > >Article: 44555
I have a design where I am bringing in a 66 MHz oscillator and dividing by two to get 33 MHz for my PCI interface. The reason I am doing this is that I was taught that clock symmetry is better when the oscillator is brought into the logic and divided by two. But what I seem to see is that this concept confuses timing user constraints and the ISE tools. It seems that the tools now consider the net where CLK/2 is sourced to not be a global net and further, the options of making timing constraints are non-existent from the Windows GUI editor. I suppose that I can define a TNM_NET, but perhaps the proper thing is to use a 33 MHz oscillator instead and give up on my conception that oscillator/2 is the way to go for an FPGA design. I would appreciate any comments on this. CharlesArticle: 44556
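For reference, the TNM_NET approach Charles mentions would look something like this in the UCF (the net name "clk33_int" is hypothetical; the divided clock gets its own group and an explicit PERIOD, since the tools no longer infer one for CLK/2):

```
# Hypothetical net name for the internally divided clock; group it
# with TNM_NET and give it an explicit 30 ns PERIOD by hand.
NET "clk33_int" TNM_NET = "grp_clk33";
TIMESPEC "TS_clk33" = PERIOD "grp_clk33" 30 ns HIGH 50%;
```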
The counter is already pipelined. Does the syn_reference_clock guarantee that the maximum delay on the clock enable signal is one high-frequency period (4 ns) - setup & hold time ? "Ken McElvain" <ken@synplicity.com> wrote in message news:3D161043.4070204@synplicity.com... > Look up the syn_reference_clock attribute in the manual. This can > be used to do exactly what you want. > > It will be something like > > define_clock clk50 -period 20.0 -virtual > define_attribute {find -reg -enable en50} syn_reference_clock clk50 > > If you could back your counter up one state and add a pipeline > stage for the enable, that would help a lot in making timing on the > CE pins. > > Ghys wrote: > > > Hi, > > > > I have a design that runs at 50 MHz, with a system clock of 250 MHz, and a > > clock enable of 50 MHz, on a Xilinx Virtex 2 xc6000. > > > > The clock enable is generated internally with a simple counter at 250 MHz > > that generates a pulse every 5 clock cycle. This pulse is distributed to 150 > > flip flop's CE input. > > > > How do I tell Synplify 7.1 Pro this situation : > > > > 1 - The major part of the design logic runs only at 50 MHz, even though the > > system clock is at 250 MHz. > > 2 - The pulse generated from the counter only has a system clock period (4 > > ns) to reach every flip flop CE input. > > > > knowing that Synplify will replicate this net (and therefore renaming the > > nets) to reduce fanout. > > > > Thanks, > > > > Ghyslain Gagnon > > > > > >Article: 44557
rickman <spamgoeshere4@yahoo.com> writes: > Nicholas Weaver wrote: > > > > rickman <spamgoeshere4@yahoo.com> wrote: > > > > >Can you cite a source for this info? Tests run by my office mate, just before I moved to this office in May (so I do not know the details, just the result: sci/eng get Athlons). > > I have never heard that P4s won't > > >run at full speed for long periods. This may be true in laptops, I It definitely was desktop machines. The only notebooks in our setup are students' or professors' private machines. All computer center stuff is desktop or server. > > >don't doubt, but I can't belive they would design a desktop that won't > > >crunch at top clock all day. The largest part of Intel's users will be running Office, that does not crunch numbers for long times. > > The P4 has a clock throttling thermal diode. In case of overheat, it > > cuts the clock in half. Early systems had poorly implemented cooling, > > so you could trigger it during normal use on the early desktops. And first generation P4s get/got awfully hot. Them GHz come with lots of current usage. > But you don't have any data or reports showing that this is a common > problem? Do you know any brand names or models? Tests were on whatever demo/test systems our standard vendor (http://www.dalco.ch/index.php) delivered for that test series. And they are specialised in delivering machines for sci/eng, so these were most likely not the cheapest PC crap. They also delivered our 502 processor Beowulf (http://www.asgard.ethz.ch/). -- Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/ Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer - Make your code truely free: put it into the public domainArticle: 44558
"rickman" <spamgoeshere4@yahoo.com> schrieb im Newsbeitrag news:3D14ABCC.88D61C29@yahoo.com... > 113 MB!!! That will take about 13 hours on my connection, assuming that > it completes which it often won't with such a large file. The last time > I did this it took nearly a week of trying to get it to download. Have a look at mozilla. Its a nice powerfull download tool, which can resume broken connection when downloading file, so you wont have to start al over again when it chrases @ 99%. > Why the heck doesn't Xilinx make it available on CD for $10 or so. > Netscape does it. What's the problem? Are they concerned that users > will expect to get support for their $10? Hmm, good question. -- MfG FalkArticle: 44559
"Ghys" <gagnon77@hotmail.com> wrote in message news:<SIoR8.268$Wm5.95145@wagner.videotron.net>... > The counter is already pipelined. > Does the syn_reference_clock garantee that the maximum delay on the clock > enable signal is one high-frequency period (4 ns) - setup & hold time ? I believe the reference to the added pipeline stage was to back-up the counter decode one or more cycles, where it could be re-registered (pipelined) to help the fanout issues. For example, if the pulse is generated on count number 4, if you could generate it on count number 3, you could take that pulse and register it into perhaps 5 other flip-flops in parallel, to reduce your critical path clock enable fanout by a factor of 5. The change in code to account for the new clock enable distribution might be a pain, but is probably worth the effort. If sounds like you have only one clock domain (250 MHz), so I believe the tools will understand the 4 ns timing constraint as specified in your constraints file. You might investigate the use of multicycle path constaints to relax datapaths that do not have to propagate at 4 nS. I noticed in a VirtexE design, some portion of the Xilinx Tool Set (FPGA Express?) used a gbuf to route a synchronous reset sigal to a whole slew of flip-flops. I don't believe Virtex allowed this. If you have a gbuf to burn, and VirtexII allows routing of global lines to non-clock CLB inputs, and the global routing delay is fast enough, you might try instantiating a gbuf to route the Clock Enable, but probably only as a last resort. NewmanArticle: 44560
The path you mention should be from a flip-flop clocked by a 250 MHz clock and will end at a flip-flop clocked by the 50 MHz clock. Synplify will compute the closest approach of the clocks for endpoints of the path, which will be 4ns and use that for the constraint. Ghys wrote: > The counter is already pipelined. > Does the syn_reference_clock garantee that the maximum delay on the clock > enable signal is one high-frequency period (4 ns) - setup & hold time ? > > > > "Ken McElvain" <ken@synplicity.com> wrote in message > news:3D161043.4070204@synplicity.com... > >>Look up the syn_reference_clock attribute in the manual. This can >>be used to do exactly what you want. >> >>It will be something like >> >>define_clock clk50 -period 20.0 -virtual >>define_attribute {find -reg -enable en50} syn_reference_clock clk50 >> >>If you could back your counter up one state and add a pipeline >>stage for the enable, that would help a lot in making timing on the >>CE pins. >> >>Ghys wrote: >> >> >>>Hi, >>> >>>I have a design that runs at 50 MHz, with a system clock of 250 MHz, and >>> > a > >>>clock enable of 50 MHz, on a Xilinx Virtex 2 xc6000. >>> >>>The clock enable is generated internally with a simple counter at 250 >>> > MHz > >>>that generates a pulse every 5 clock cycle. This pulse is distributed to >>> > 150 > >>>flip flop's CE input. >>> >>>How do I tell Synplify 7.1 Pro this situation : >>> >>>1 - The major part of the design logic runs only at 50 MHz, even though >>> > the > >>>system clock is at 250 MHz. >>>2 - The pulse generated from the counter only has a system clock period >>> > (4 > >>>ns) to reach every flip flop CE input. >>> >>>knowing that Synplify will replicate this net (and therefore renaming >>> > the > >>>nets) to reduce fanout. >>> >>>Thanks, >>> >>>Ghyslain Gagnon >>> >>> >>> >>> > >Article: 44561
newman wrote: > "Ghys" <gagnon77@hotmail.com> wrote in message news:<SIoR8.268$Wm5.95145@wagner.videotron.net>... > >>The counter is already pipelined. >>Does the syn_reference_clock garantee that the maximum delay on the clock >>enable signal is one high-frequency period (4 ns) - setup & hold time ? >> > > I believe the reference to the added pipeline stage was to back-up the > counter decode one or more cycles, where it could be re-registered > (pipelined) to help the fanout issues. > > For example, if the pulse is generated on count number 4, if you could > generate it on count number 3, you could take that pulse and register > it into perhaps 5 other flip-flops in parallel, to reduce your > critical path clock enable fanout by a factor of 5. The change in > code to account for the new clock enable distribution might be a pain, > but is probably worth the effort. > > If sounds like you have only one clock domain (250 MHz), so I believe > the tools will understand the 4 ns timing constraint as specified in > your constraints file. You might investigate the use of multicycle > path constaints to relax datapaths that do not have to propagate at 4 > nS. The syn_reference_clock attribute takes care of this. Paths between enabled flops will have 20ns because they are "clocked" by the virtual clock clk50. > > I noticed in a VirtexE design, some portion of the Xilinx Tool Set > (FPGA Express?) used a gbuf to route a synchronous reset sigal to a > whole slew of flip-flops. I don't believe Virtex allowed this. If you > have a gbuf to burn, and VirtexII allows routing of global lines to > non-clock CLB inputs, and the global routing delay is fast enough, you > might try instantiating a gbuf to route the Clock Enable, but probably > only as a last resort. > > > Newman >Article: 44562
In a previous post, I mentioned something about a gbuf driving a synchronous reset in a VirtexE device. It was actually an asynchronous reset driven by a flip-flop in the same clock domain that got routed/synthesized by the tools via a gbuf. I was in a hurry, and still had 2 spare gbufs, so I did not investigate this phenomenon as much as I would have liked to, but it worked like a champ. NewmanArticle: 44563
This is covered in the FAQ: http://www.fpga-faq.com/FAQ_Pages/0031_How_to_initialize_Block_RAM.htm On Tue, 18 Jun 2002 05:17:33 -0800, hull <hullhull@sina.com> wrote: >I can only use defparam to initial a block ram in the behavior >simulation and the clause of defparam can not be synthesized.In >the language template, before the defparam there are a sentence >of "//synthesis translate_off" and after the defparam there >are "//synthesis translate_on".Does this mean I can not use >this method to form a rom? But the ISE data sheet says I can >do so.I am puzzled and thanks for answering. =================== Philip Freidin philip@fliptronics.com Host for WWW.FPGA-FAQ.COMArticle: 44564
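For VHDL users, a synthesizable alternative to defparam is to pass the INIT_xx generics (64 hex digits = 256 bits each) when instantiating the Virtex block RAM primitive. The sketch below is from memory, so check the port and generic names against your unisim library; the INIT_00 value is arbitrary example data.

```vhdl
-- Illustrative 512x8 ROM built from a Virtex block RAM, initialised
-- through the INIT_xx generics rather than defparam.
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity rom512x8 is
    port ( clk  : in  std_logic;
           addr : in  std_logic_vector(8 downto 0);
           dout : out std_logic_vector(7 downto 0) );
end rom512x8;

architecture rtl of rom512x8 is
begin
    rom0 : RAMB4_S8
        generic map (
            -- arbitrary example contents for the first 256 bits
            INIT_00 => X"0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF" )
        port map (
            CLK  => clk,
            ADDR => addr,
            DI   => "00000000",  -- never written, so contents stay at INIT
            EN   => '1',
            WE   => '0',
            RST  => '0',
            DO   => dout );
end rtl;
```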
I'm not yet a PCI designer, but my recollections of the interface make me ask: Do you need the symmetry? If you don't require the falling edge, life is better. If you need "both edges" of the 33MHz clock, you can use a clk enable to engage the appropriate registers. The constraints are sometimes troublesome, but the "multi-cycle path" constraint is supported in most (all?) timing sensitive synthesizers as well as the Xilinx back-end tools. No divide by two physical clock net required. cfk wrote: > > I have a design where I am bringing in a 66 Mhz oscillator and dividing by > two to get 33 Mhz for my PCI interface. The reason I am doing this is that I > was taught that clock symmetry is better when the oscillator is brought into > the logic and divided by two. But what I seem to see, is that this concept > confuses timing user constraints and the ISE tools. It seems that the tools > now consider the net where CLK/2 is sourced to not be a global net and > further, the options of making timing constraints are non-existent from the > windows gui editor. I suppose that I can define a TNM_NET, but perhaps the > proper thing is to use a 33Mhz oscillator instead and give up on my > conception that oscillator/2 is the way to go for an FPGA design. I woud > appreciate any comments on this. > > CharlesArticle: 44565
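The clock-enable style suggested above can be sketched like this (names are illustrative): everything stays on the 66 MHz clock, the enable generator is the only register toggling every cycle, and the "33 MHz" registers update every other cycle, which is what the multi-cycle path constraint then describes.

```vhdl
-- Sketch: qualify "33 MHz" registers with a toggling clock enable
-- instead of dividing the 66 MHz clock into a second clock net.
library ieee;
use ieee.std_logic_1164.all;

entity ce_divider is
    port ( clk66 : in  std_logic;
           d     : in  std_logic;
           q     : out std_logic );
end ce_divider;

architecture rtl of ce_divider is
    signal en33 : std_logic := '0';
begin
    process (clk66)
    begin
        if rising_edge(clk66) then
            en33 <= not en33;      -- asserted every other 66 MHz cycle
            if en33 = '1' then
                q <= d;            -- behaves like a 33 MHz register
            end if;
        end if;
    end process;
end rtl;
```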
In article <3D161031.480B750C@yahoo.com>, rickman <spamgoeshere4@yahoo.com> wrote: >> Also, remember that the P4 started out with some REALLY REALLY crappy >> chipsets, either RDRAM (horribly expensive) or poor and therefore slow >> SDRAM implementations. Which did effect things. > >What did it affect? I am not following you here. It meant that the P4 initially had horrible memory performance compared to the DDR Athlons at a reasonable price point (as RDRAM is expensive), as only SDRAM was offered in the Intel chipsets, and even that wasn't great. >> You might want to reconsider, simply because the desktops are SO much >> cheaper. You can build yourself a really nice CAD machine for the >> desktop for a low cost, esp if you reuse your monitor. > >I think I know my needs better than most other people know my needs. A >desktop is nice, but I get really tired of packing it up when I have to >drive to my other work locations. Whew, BIG box. I can't even get the >19" monitor box in my seat, it has to go in the back of the pickup! If your real goal is a mobile desktop, why not look at a Shuttle all-in-one system? You can get a P4 or Athlon, in a small aluminum cube, (SS50 is the P4, SS40 is the Athlon) and an LCD display, pack it up in a backpack setup and you have a mobile desktop, with some real CPU power in it, which should be in the <20 lbs and highly packable range. You lose about 20% memory bandwidth due to shared video but you do a lot better than a laptop, as you could have a >2.0 GHz P4 with DDR-DRAM. >> The P4 has a clock throttling thermal diode. In case of overheat, it >> cuts the clock in half. Early systems had poorly implemented cooling, >> so you could trigger it during normal use on the early desktops. > >But you don't have any data or reports showing that this is a common >problem? Do you know any brand names or models? There was some debate on the subject when the P4 first came out. Tom's Hardware shows what happens when you de-heatsink it. 
It tends to work fine (no slowdown) even in small cases (e.g., SV50 case), at least for the desktop machines. -- Nicholas C. Weaver nweaver@cs.berkeley.eduArticle: 44566
Nicholas Weaver wrote: > > In article <3D161031.480B750C@yahoo.com>, > rickman <spamgoeshere4@yahoo.com> wrote: > >> Also, remember that the P4 started out with some REALLY REALLY crappy > >> chipsets, either RDRAM (horribly expensive) or poor and therefore slow > >> SDRAM implementations. Which did effect things. > > > >What did it affect? I am not following you here. > > It meant that the P4 initially had horrible memory performance > compared to the DDR athlons at a reasponable price point (as RDRAM is > expensive), as only SDRAM was offered in the Intel chipsets, and even > that wasn't great. I don't understand why you are talking about this. Perhaps we are having two different conversations. I am interested in today's computers and most specifically laptops. I don't see how the memory used in a computer a year or more ago is of interest to either of us. > >> You might want to reconsider, simply because the desktops are SO much > >> cheaper. You can build yourself a really nice CAD machine for the > >> desktop for a low cost, esp if you reuse your monitor. > > > >I think I know my needs better than most other people know my needs. A > >desktop is nice, but I get really tired of packing it up when I have to > >drive to my other work locations. Whew, BIG box. I can't even get the > >19" monitor box in my seat, it has to go in the back of the pickup! > > If your real goal is a mobile desktop, why not look at a Shuttle > all-in-one system. You can get a P4 or Athlon, in a small aluminum > cube, (SS50 is the P4, SS40 is the Athlon) and a LCD display, pack it > up in a backpack setup and you have a mobile desktop, with some real > CPU power in it, which should be in the <20 lbs and highly packable > range. You lose about 20% memory bandwidth due to shared video but > you do a lot better than a laptop, as you could have a >2.0 GHz P4 > with DDR-DRAM. 
I would never buy a machine with shared video memory because 20% performance loss is not worth the small price savings realized. I also am not interested in a cube that is not functional until you find a power outlet and a desk to set up the keyboard, monitor, etc. Since you don't know how fast current laptops are, how can you say that a desktop with performance limitations is better? BTW, I can buy a mobile P4 to run at 2.0 GHz with DDR in a laptop. I am not asking for you to rethink my basic decisions. I am asking about performance of various laptops to use for FPGA design. The laptop CPUs I can buy today are about as good as the desktop CPUs I could not have bought until 6 months ago, or maybe even less. I am willing to pay the price premium to gain the portability and ease of setup. > >> The P4 has a clock throttling thermal diode. In case of overheat, it > >> cuts the clock in half. Early systems had poorly implemented cooling, > >> so you could trigger it during normal use on the early desktops. > > > >But you don't have any data or reports showing that this is a common > >problem? Do you know any brand names or models? > > There was some debate on the subject when the P4 first came out. Tom's > Hardware shows what happens when you de-heatsink it. It tends to work > fine (no slowdown) even in small cases (EG, SV50 case), at least for > the desktop machines. Yes, and all of that data has little bearing on the real point. If I buy machine X, will it have a heat problem when run at full speed, that is will it self-limit due to heat? I will be asking that question as I make my selection. But I don't see how a thermal protection mechanism makes a chip unsuitable for computation use. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. 
Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 44567
Neil Franklin wrote: > > rickman <spamgoeshere4@yahoo.com> writes: > > > Nicholas Weaver wrote: > > > > > > rickman <spamgoeshere4@yahoo.com> wrote: > > > > > > >Can you cite a source for this info? > > Tests run by my office mate, just before I moved to this office im Mai > (so I do not know the details, just the result: sci/eng get Athlons). But if we don't have any info on how the tests were done, then we don't know under what conditions the results will change. Both Athlons and P4s have been changing over the last year. I also do not understand exactly how the CPU heats up as you do "computation". Have they changed the OS so that the CPU stops when you don't give it something to do? Does the clock slow down when running the IDLE task? I suppose that memory accesses are reduced since you should get more cache hits, but I don't see how the CPU heat will change significantly. > > > >don't doubt, but I can't belive they would design a desktop that won't > > > >crunch at top clock all day. > > The largest part of Intels users will be running Office, that does not > crunch numbers for long times. But even if 1% of the machines are running workhorse code, it would be immediately recognized if they were slower than other machines with much lower clock ratings. In my last position we had a bank of servers to crunch FPGA work and we would have known there was a problem. We had one P4 which was used by the group designing in the largest FPGAs we were using. We did not see a problem and this 1 GHz machine ran faster than the 750 MHz P3s. I can't believe Intel is making a chip or that Dell, etc. are making machines that slow down running "normal" FPGA apps. > > > The P4 has a clock throttling thermal diode. In case of overheat, it > > > cuts the clock in half. Early systems had poorly implemented cooling, > > > so you could trigger it during normal use on the early desktops. > > And first generation P4s get/got awfully hot. Them GHz come with lots > of current usage. 
We are no longer buying first gen P4s. They are up to 2.5 GHz or so and have reduced geometries with lower power demands. I will check with some former coworkers and see if they have a newer, faster P4 server now. > > But you don't have any data or reports showing that this is a common > > problem? Do you know any brand names or models? > > Tests were on whatever demo/test systems our standard vendor > (http://www.dalco.ch/index.php) delivered for that test series. And > they are specialised into delivering machines for sci/eng, so these > were most likely not the cheapest PC crap. They also delivered our > 502 processor Beowulf (http://www.asgard.ethz.ch/). So you don't have any specific data on what machine type had what problem, just general recommendations? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 44568
Falk Brunner wrote: > > "rickman" <spamgoeshere4@yahoo.com> schrieb im Newsbeitrag > news:3D14ABCC.88D61C29@yahoo.com... > > 113 MB!!! That will take about 13 hours on my connection, assuming that > > it completes which it often won't with such a large file. The last time > > I did this it took nearly a week of trying to get it to download. > > Have a look at mozilla. Its a nice powerfull download tool, which can resume > broken connection when downloading file, so you wont have to start al over > again when it chrases @ 99%. > > > Why the heck doesn't Xilinx make it available on CD for $10 or so. > > Netscape does it. What's the problem? Are they concerned that users > > will expect to get support for their $10? > > Hmm, good question. Thanks for the advice Falk. But Netscape also has that capability. But it is not perfect and even if it works right, it can take several days to get the thing to download when it keeps crapping out. Anything over about 20 MB is just a PITA when you are running on such a slow link as a dial up modem. -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 44569
Duane Clark wrote: > > Marcel wrote: > > Hi, > > > > I`m using Xilinx webpack and am quite new to VHDL, at the marked line I have > > to place three times end if, > > otherwise a syntag error is generated. > > > > Why do I have to place 3 times end if here instead 1 end if ? > > > > Any ideas ? > > > > > > > > process( SEL, D ) > > begin > > if ( SEL = "0000" ) then > > TCK <= D(0); > > else if ( SEL = "0001" ) then > > "elsif" instead of "else if". Marcel, Now you are supposed to say, "never mind"... :) -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 44570
> I have a design where I am bringing in a 66 MHz oscillator and dividing by
> two to get 33 MHz for my PCI interface. The reason I am doing this is that I
> was taught that clock symmetry is better when the oscillator is brought into
> the logic and divided by two. But what I seem to see is that this concept
> confuses user timing constraints and the ISE tools. It seems that the tools
> now consider the net where CLK/2 is sourced to not be a global net, and
> further, the options for making timing constraints are non-existent from the
> Windows GUI editor.
>
> Charles

Why don't you directly use PCI's 33 MHz clock, which you will get on the PCI bus? Or else, why don't you use an external clock divider? I guess even in ISE you can define an internal global buffer (like IBUF) for your 33 MHz clock...

---sushant

Article: 44571
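[For the archive reader: the usual cure for the "divided clock is not a global net" complaint is to drive the divided clock through a global clock buffer explicitly. A hedged sketch of the classic divide-by-two idiom for Xilinx parts of this era; the entity and signal names are made up for illustration, and BUFG is the Xilinx UNISIM global-buffer primitive:]

```vhdl
library ieee;
use ieee.std_logic_1164.all;
library unisim;
use unisim.vcomponents.all;

entity clkdiv2 is
    port ( clk66 : in  std_logic;    -- 66 MHz oscillator input
           clk33 : out std_logic );  -- divided 33 MHz output
end clkdiv2;

architecture rtl of clkdiv2 is
    signal div : std_logic := '0';
begin
    -- Toggle flip-flop: 66 MHz / 2 = 33 MHz, guaranteed 50% duty cycle.
    process(clk66)
    begin
        if rising_edge(clk66) then
            div <= not div;
        end if;
    end process;

    -- Route the divided clock through a global buffer so the tools treat
    -- it as a proper low-skew clock net and accept timing constraints on it.
    bufg_inst : BUFG port map ( I => div, O => clk33 );
end rtl;
```

[On parts with a DLL/DCM, letting the CLKDV output do the division is the more robust alternative, since it also manages the clock-distribution delay.]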
Thanks very much for your help and suggestions. I will definitely look into upgrading and also experimenting!

Thanks again
Ryan

"Wolfgang Loewer" <wolfgang.loewer@elca.de> wrote in message news:aevea3$ag9$07$1@news.t-online.com...
> Hi,
>
> You should switch to NIOS V2.0 or V2.1; the board is the same, you just need
> the new SW. The newer versions have full SOPC Builder support for
> integrating multiple NIOS processors into a single system. Check out
> application note 184, which is available on the Altera website. It discusses
> the implementation of multiple masters in a NIOS system. These multiple
> masters can also be multiple NIOS processors. For each processor you get
> your own directory structure for SW development. With some mouse clicks you
> can add multiple processors and specify what slaves they should share.
> Slave-side arbitration gets automatically built in for those slaves that can
> be accessed by multiple masters, and the arbitration scheme is configurable.
> Furthermore, each master gets its own Avalon bus segment, and so multiple
> masters can simultaneously talk to different slaves. Arbitration only takes
> place if multiple masters try to access the same slave in the same cycle.
> When using V2.0 or newer, and once you understand the concept, it's really
> just a question of minutes to put together a complex system with multiple
> processors.
>
> For your application you could have one master CPU that runs from external
> memory. The slave CPUs could run from on-chip RAM or ROM. You could
> implement a common on-chip memory that can be accessed from all CPUs in the
> system and that's used to exchange data between the CPUs. Alternatively you
> could also make the data memories of the slave CPUs accessible to the
> master. The possible combinations are infinite, and what you described is
> certainly possible. V2.0 and newer supports a multi-master Avalon bus
> structure, and so it's a lot easier to implement than with 1.1 or 1.1.1.
>
> Regards
> Wolfgang
> http://www.elca.de
>
> "Ryan" <ryans@cat.co.za> wrote in message
> news:3d12031a.0@obiwan.eastcoast.co.za...
> > Hi
> >
> > I am working with an Altera Excalibur Nios development board version 1.1.
> > Is it possible to configure the PLD (20K200) with more than 1 Nios CPU?
> > What I would like to do is configure 1 CPU to be a master and the others
> > as slaves. The master will then control the processes executed on the
> > slaves etc. Ideally the slaves will not have to make use of external SRAM
> > or FLASH. Is this concept possible, and how can it be implemented? If not,
> > have you any suggestions on what other route to consider?
> >
> > Thanks
> > Ryan

Article: 44572
Hello Community!

Within Agilent's Advanced Design System it is possible to generate VHDL or Verilog code. Does anyone have experience with this procedure and with using the code in ISE?

Thank you,
Thomas

PS: Please answer by email too, if possible.

Article: 44573
Hi,

I'm a beginner with Xilinx FPGAs; I used to work with Altera FPGAs. I'm trying to compare both embedded processor solutions for a typical application, in order to choose the best one (speed of calculation). So I am trying to run the Xilinx MicroBlaze examples on the Memec Spartan-II demo board. When I try the interrupt_controller example, I can't send data (numbers) with the Windows HyperTerminal (I configured my COM port: 19200 bps, 8 bits, no parity, 1 stop bit, no flow control). The numbers are not echoed on screen. The program runs, but I can't change the frequency of the LED rotation.

Thanks.

Article: 44574
Hello all,

I am a newbie in electronics... and I have a quite big problem. I am in urgent need of cloning a TIBPAL20L8 device I have on a custom board, whose author unfortunately passed away. I would like to have a spare before the irreparable occurs and I have to kiss the board goodbye for good.

How should I proceed? Is there any free PAL/GAL programmer available on the net that could help me read the contents of the 20L8 and then burn it onto another PAL or a compatible GAL (a 20V8 comes to my mind)?

If I can't read the contents of the PAL, I have the schematics of the PCB where this device is placed, but I don't know if that is sufficient to understand and trace the equations of the PAL device.

Thanks in advance.
Riccardo