Extract from "The RocketIO™ Transceiver User Guide UG024 (v2.5) December 9, 2004": "PCB Design Requirements (Page 109) To operate properly, the RocketIO transceiver requires a certain level of noise isolation from surrounding noise sources. For this reason, it is required that both dedicated voltage regulators and passive high-frequency filtering be used to power the RocketIO circuitry." If you don't use the RocketIOs you still have to supply power, but you can use the VCCAUX supply in this case. Hope this helps clarify the situation. Jason

Article: 82526
Hello Austin. Thanks for your reply. Austin Lesea wrote: > Pablo, > > Have you logged this into the hotline as a case? Best way to address > new software glitches is to report them. > > It might be a bug, it might be a new feature. > > Only way to find out is to look. You can help us just by logging it in. > > Same amount of work as this email posting, maybe less. > > Austin > Yes, I am planning to do that. But first I wanted to ask here in case I was doing something terribly dummy (you know, embarrassment-concerning replies tend to be faster here). I will use the 6.3 installed in another computer to gather enough evidence and submit a full case report. Thanks again. Cheers. -- PabloBleyerKocik pablo /"Reliable software must kill people reliably." @bleyer.org / -- Andy MickelArticle: 82527
Austin, are you saying that you've actually seen the circuit actually pass 3v across the switch ? Or are you saying that the output SHOULD be clamped around 2.3v ? Perhaps I should've been more clear, but I am currently testing a new ckt board built with the circuit in xapp646 and a virtex2 part. With the IDT vcc at 3.3v, the maximum voltage I am seeing with a scope is about 2.3v on the output side of the switch. IE., when the virtex2 part is driving (lvttl) one side of the switch at 3.3v, the other side of the switch is about 2.3v. When the 5v ttl part 74FCT645 is writing to the virtex2 part, I see 4v on that side of the switch and 2.3v on the virtex2 side. When I originally looked at xapp646, it stated that it would clamp to less than ~3v outputs, I didn't think it would be closer to 2.3v. If I had used the implementation in IDT's or TI's appnote where they drive the vcc pin at 4.3v using a diode from 5v, I wonder if that would have worked better. gja "Austin Lesea" <austin@xilinx.com> wrote in message news:d3jd3k$mb11@cliff.xsj.xilinx.com... > gja, > > Basically, I am using the fact that the IDT device is just a simple NMOS > transistor, and since I know how that works (physically) I am ignoring the > data sheet (as it is misleading in this case). > > I know that IDT does not support this from their data sheet > specifications, and they actually called me to tell me that they would not > support this. > > Odd. It works fine. They are sand-bagging their specifications like > crazy here, and their parts work far better than the data sheet implies > (in this circuit). > > Could be the loading (none), could be the voltages (less variation than > what they spec), could be they don't want to support the application. > Fine, call Xilinx. I'd much rather you call us than IDT. OK by me. We > have built it, used it, tested it, and are still doing so. I know a lot > of folks out there who have done likewise. Haven't heard a single > complaint. 
> > Officially, the PCI specification does not allow any devices to be placed > in series with a PCI compatible part. That is fine as well. > > Austin > > gja wrote: >> Austin, >> Maybe you can give me more insight to a problem I have with xapp646. The >> note states that "Since the device is a set of series-connected NMOS >> transistors, any voltage larger than a few hundred millivolts below the >> VCC pin voltage will be cut off." >> From reading the IDT appnotes and what I'm seeing on a circuit board, the >> output will always be limited to less than VCC-1. With VCC at 3.3v as >> shown in xapp646, under light loading, the output voltage is about 2.3v, >> and with a 10k load, it's closer to 2v which means essentially no noise >> margin for TTL. Look at figure 4 of >> http://www1.idt.com/pcms/tempDocs/AN_11.pdf or figure 5 of >> http://www1.idt.com/pcms/tempDocs/quickswitch_basics.pdf >> Do you think that I should be seeing around 2 to 2.3v output with the ckt >> shown in xapp646? >> >> Dr, take a look at TI's sn74cb3t3384 or sn74cbtd3384c as well as some >> appnotes on their site. >> >> >> gja >> >> "Austin Lesea" <austin@xilinx.com> wrote in message >> news:d3gogs$lr91@cliff.xsj.xilinx.com... >> >>>Dr, >>> >>>Spartan 2 will be around a long time. That we have demoted it from the >>>limelight is a marketing issue (just so much shelf space for the new >>>products to showcase). >>> >>>As you may be aware, we still provide the 3100A series of FPGAs, which >>>are still supporting designs done 15 years ago! >>> >>>We discontinue devices once they are not able to be manufactured and sold >>>economically. This means that there is little business, and the process >>>used to make the chips has become obsolete at the fabrication facilities. >>>We also may discontinue a particular part/package combination when that >>>package is running at extremely low volumes or becomes difficult to >>>procure. 
>>> >>>Since we are still making almost all of our FPGA products, I don't think >>>you have anything to worry about with Spartan II. >>> >>>The original Virtex, and Spartan II are a lot like classic Coca-Cola -- >>>they may never go away. >>> >>>However, the cost/function of newer devices is so much better than the >>>older devices, that you may want to consider designing with the latest >>>devices (at some point). >>> >>>The app notes we have published for 5V PCI details all of the tricks to >>>make the latest 90nm devices work on the 5V PCI bus. (Xapp 646, 311) >>> >>>I hope this helps, >>> >>>Austin >> >>Article: 82528
There are very few applications that do not benefit significantly from a cache. Even if the data side does not benefit dramatically from cache (like working mostly on large amounts of sequential data), the instruction side generally does. In many benchmarks, running from external memory with cache achieves almost the same performance as running from LMB BRAMs. The first thing I would check is to make sure the cache is configured correctly. Make sure the address space of the cache includes the external memory you are caching (C_ICACHE_BASEADDR, C_ICACHE_HIGHADDR, C_DCACHE_BASEADDR, C_DCACHE_HIGHADDR). Make sure caches are turned on (C_USE_ICACHE, C_USE_DCACHE). The most common missing item is not turning caches on in software. microblaze_enable_icache(); microblaze_enable_dcache(); If you are using xmd and downloading multiple programs that turn on cache, you should wipe the cache before enabling it; otherwise you could have cache data from the previous program. microblaze_init_icache_range(0, ICACHE_SIZE); microblaze_init_dcache_range(0, DCACHE_SIZE); As Shalin referred to, in MicroBlaze v3.00a and higher the Xilinx Cache Link (XCL) interface adds an optional dedicated cache interface and uses 4-word cache lines to improve performance. You may want to look into using XCL and mch_opb_sdram. The mch_opb_sdram is new in EDK 7.1. This e-mail is my own opinion, and not an official Xilinx e-mail. Reverse domain and remove the NOSPAM from e-mail address to respond by e-mail. v_mirgorodsky@yahoo.com wrote: > Hi, ALL! > > Recently one of my friends faced very strange problem. He had the > MicroBlaze CPU in his design running with 50MHz clock speed. He also > had external SDRAM module and his application was executing out of > external SDRAM memory. During first few benchmark tests he realized > that it takes about 24 clock cycles to access memory :( This means > that cool embedded 50MHz MicroBlaze CPU runs slower than poor external > 8MHz AVR. 
After my advice he enabled the cache within MicroBlaze, but > application execution speed did not increase significantly. > > As he described later, this was one of the hands-on samples from EDK. May be > the sample is not optimized for performance and very simplified, but > net performance of 2MHz processor is not even close to advertised by > Xilinx :( > > Could any one give any comment on that? > > Regards, > Vladimir S. Mirgorodsky

Article: 82529
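The numbers Vladimir quotes reduce to simple arithmetic: at 50 MHz with ~24 cycles per external fetch, effective throughput is only about 2 MIPS, and a working cache recovers most of the lost performance. A rough sketch of that calculation (the hit cost and hit rate below are assumed values for illustration, not measurements from the thread):

```python
# Rough model of MicroBlaze instruction throughput with and without cache,
# using the figures quoted in the post (50 MHz clock, ~24 cycles per
# external-SDRAM access). Cache hit cost and hit rate are assumed values.
CLOCK_HZ = 50_000_000
UNCACHED_CYCLES = 24   # cycles per fetch from external SDRAM (from the post)
HIT_CYCLES = 1         # assumed single-cycle fetch on a cache hit
HIT_RATE = 0.95        # assumed instruction-cache hit rate

def effective_mips(cycles_per_fetch):
    # Instructions per second, expressed in MIPS
    return CLOCK_HZ / cycles_per_fetch / 1e6

uncached = effective_mips(UNCACHED_CYCLES)
avg_cycles = HIT_RATE * HIT_CYCLES + (1 - HIT_RATE) * UNCACHED_CYCLES
cached = effective_mips(avg_cycles)

print(f"uncached: {uncached:.2f} MIPS")  # ~2.08 -- the "2 MHz processor"
print(f"cached:   {cached:.2f} MIPS")
```

This is why a cache that is configured but never enabled in software makes essentially no difference: the average fetch cost stays pinned at the uncached figure.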
There are plenty of issues in 7.1 SP1. Handle with care! Alfred "Pablo Bleyer Kocik" <pablo.N@SPAM.bleyer.org> schrieb im Newsbeitrag news:d3k0an$15r$1@domitilla.aioe.org... > > Hello Austin. Thanks for your reply. > > Austin Lesea wrote: > > Pablo, > > > > Have you logged this into the hotline as a case? Best way to address > > new software glitches is to report them. > > > > It might be a bug, it might be a new feature. > > > > Only way to find out is to look. You can help us just by logging it in. > > > > Same amount of work as this email posting, maybe less. > > > > Austin > > > > Yes, I am planning to do that. But first I wanted to ask here in case I was > doing something terribly dummy (you know, embarrassment-concerning replies tend > to be faster here). I will use the 6.3 installed in another computer to gather > enough evidence and submit a full case report. > > Thanks again. Cheers. > > -- > PabloBleyerKocik > pablo /"Reliable software must kill people reliably." > @bleyer.org / -- Andy MickelArticle: 82530
gja, See below, Austin gja wrote: > Austin, are you saying that you've actually seen the circuit actually pass > 3v across the switch ? Yes, I have. Or are you saying that the output SHOULD be clamped > around 2.3v ? No, I am not. > > Perhaps I should've been more clear, but I am currently testing a new ckt > board built with the circuit in xapp646 and a virtex2 part. > With the IDT vcc at 3.3v, the maximum voltage I am seeing with a scope is > about 2.3v on the output side of the switch. IE., when the virtex2 part is > driving (lvttl) one side of the switch at 3.3v, the other side of the switch > is about 2.3v. Into what load? On a "real" PCI bus is where I made the measurements. Perhaps if you use the resistor "standard termination" (which is anything but a standard) you will get some other result? When the 5v ttl part 74FCT645 is writing to the virtex2 part, > I see 4v on that side of the switch and 2.3v on the virtex2 side. OK. I will admit that the TI part has some advantages, but it is also an active device, and adds delay, doesn't it? When I > originally looked at xapp646, it stated that it would clamp to less than ~3v > outputs, I didn't think it would be closer to 2.3v. If I had used the > implementation in IDT's or TI's appnote where they drive the vcc pin at > 4.3v using a diode from 5v, I wonder if that would have worked better. Except that then you do not limit the voltages to less than what we require.

Article: 82531
This appears to be fixed in 7.1i by the patch from this page: http://www.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=21168 WebPACK is a free downloadable ISE subset that now supports Linux. http://www.xilinx.com/xlnx/xebiz/designResources/ip_product_details.jsp?sGlobalNavPick=PRODUCTS&sSecondaryNavPick=Design+Tools&key=DS-ISE-WEBPACK Regards, Arthur

Article: 82532
Anthony Mahar wrote: > Hello, > > Is there a way to do performance monitoring on the PPC405 in the Virtex > II Pro? I am specifically interested in cache hits. > > I have wedged my own device between the CPU's instruction and data PLB > interfaces and can currently get cache misses. But I need to find a way > to determine cache hits of an application running under an operating > system. > > If it was stand alone I could figure that information out by the number > of load and store instructions, but this is an operating system with > context switches, interrupt handlers, etc. > > Is there a way to gather this information? There did not seem to be any > performance monitoring registers as seen with newer PowerPC and x86 > systems. Can the trace port be used to passively monitor execution for > load/store instructions? Unfortunately, I have few answers to your questions. However, I know of a research group at Georgia Tech that is designing/designed a memory access monitor, which sounds similar to yours. You may want to correspond with them to exchange notes. I learned of their monitor at the HPCA 2005 FPGA workshop. Here is a link to the workshop: http://cag.csail.mit.edu/warfp2005/. A link to the workshop presentations is here: http://cag.csail.mit.edu/warfp2005/program.html. Their presentation was titled "Evaluating System wide Monitoring Capsule Design using Xilinx Virtex II Pro FPGA". Their paper has their contact information. As for the trace port, I have used it with an IBM/Agilent RISCWatch (RW) box, which collects a dynamic trace of the instructions over 8 million CPU cycles. The main limitation is that it only works for stand alone apps. When you have virtual memory enabled (while running Linux for instance), RW uses the TLB to conduct the virtual to physical address translations. This is great for regular code. However, when an interrupt is detected, the CPU converts to using physical addresses for the interrupt handler. Unfortunately, RW continues to use the TLB so it tries to translate physical addresses, for which no "translations" exist, so RW is unable to resolve interrupt handler instructions. After this point, the trace is corrupted. In any case, if you are interested in learning more about RW, you can refer to this appnote: http://direct.xilinx.com/bvdocs/appnotes/xapp545.pdf. It has links to all manuals for the RW box and its tools. Lastly, for my own curiosity, how difficult was it to design and debug your monitor? The guy I spoke to from Georgia Tech at the workshop said they used Chipscope to learn the protocol (along with IBM's PLB spec). He claims that this was a painstaking process. NN

Article: 82533
Hi Austin, > OK. I will admit that the TI part has some advantages, but it is also > an active device, and adds delay, doesn't it? Yep. About 500ps worst-case. Because of this, I tell my users to edit the constraint file so that all PCI signals have 1ns subtracted from their timing constraints. This takes care of the combinatorial delays as well. In Quartus, the difference is unnoticeable with a 33MHz bus and usually adds a little compilation time with a 66MHz bus. I'd hope the same situation would apply to Xilinx users. Best regards, Ben

Article: 82534
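As a sanity check on Ben's rule of thumb: at 33 MHz the PCI clock period is about 30 ns, so surrendering 1 ns to the switch costs only a small fraction of the budget, while at 66 MHz (~15 ns period) the same 1 ns costs roughly twice as much of it, which is consistent with the fitter working harder at 66 MHz. A quick illustration (only the 1 ns derate figure comes from the post):

```python
# Fraction of the PCI clock period consumed by the 1 ns constraint
# tightening applied to cover the bus switch (~500 ps) plus margin.
def budget_fraction(clock_mhz, derate_ns=1.0):
    period_ns = 1e3 / clock_mhz  # clock period in nanoseconds
    return derate_ns / period_ns

print(f"33 MHz: {budget_fraction(33):.1%} of the period")  # ~3.3%
print(f"66 MHz: {budget_fraction(66):.1%} of the period")  # ~6.6%
```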
I looked into doing this a while back. From the sounds of it, you have already created a data side cache miss collection engine; now you need the number of total loads and stores. As you surmised, this info can be collected by the debug interface (note the debug interface is different than the trace interface: http://www.xilinx.com/ise/embedded/ppc405block_ref_guide.pdf) and counted in a similar fashion as you currently do for the misses. Except here you need to identify the ld/st from the other instructions, but the decode is pretty straightforward. For CPI and instruction cache miss rate measurements, the same general technique can be used. You should check out Nju's xapp545 appnote for another method of collecting the trace data. You can learn a lot about what the code is actually doing by looking at 8-million-cycle dumps of instruction execution. The issue of OS context switches and interrupts is really orthogonal. You don't mention your OS but Oprofile (http://oprofile.sourceforge.net/) for Linux handles this by adding code to every context switch-causing event to collect the values of the counters--in this case the ones you've inserted between the PPC405 and PLB bus--and assign them to the currently running code. A similar approach is valid for other OSs but leveraging Oprofile is a good starting point since they've already figured out the relevant hooks into the kernel. Paul Anthony Mahar wrote: > > Hello, > > Is there a way to do performance monitoring on the PPC405 in the Virtex > II Pro? I am specifically interested in cache hits. > > I have wedged my own device between the CPU's instruction and data PLB > interfaces and can currently get cache misses. But I need to find a way > to determine cache hits of an application running under an operating > system. > > If it was stand alone I could figure that information out by the number > of load and store instructions, but this is an operating system with > context switches, interrupt handlers, etc. 
> > Is there a way to gather this information? There did not seem to be any > performance monitoring registers as seen with newer PowerPC and x86 > systems. Can the trace port be used to passively monitor execution for > load/store instructions? > > Thank you, > Tony

Article: 82535
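At readout time, the two counters Paul describes — executed loads/stores from the debug-interface decode, and line fills seen by the wedge on the PLB side — combine into hit statistics by simple subtraction. A minimal sketch of that bookkeeping (the counter values below are made up for illustration):

```python
# Derive data-cache hit statistics from two hardware counters, as in the
# scheme above: one counting executed load/store instructions, one counting
# PLB-side line fills (misses). Counter values here are illustrative only.
def dcache_stats(ld_st_count, miss_count):
    """Return (hits, hit_rate) given total loads/stores and misses."""
    if miss_count > ld_st_count:
        raise ValueError("miss counter cannot exceed load/store counter")
    hits = ld_st_count - miss_count
    rate = hits / ld_st_count if ld_st_count else 0.0
    return hits, rate

hits, rate = dcache_stats(ld_st_count=1_000_000, miss_count=42_000)
print(hits, f"{rate:.1%}")
```

The same subtraction works per-process under an OS once the counters are sampled at every context switch, which is exactly the hook Oprofile provides.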
Nju Njoroge wrote: > Anthony Mahar wrote: > >>Hello, >> >>Is there a way to do performance monitoring on the PPC405 in the > > Virtex > >>II Pro? I am specifically interested in cache hits. >> >>I have wedged my own device between the CPU's instruction and data > > PLB > >>interfaces and can currently get cache misses. But I need to find a > > way > >>to determine cache hits of an application running under an operating >>system. >> >>If it was stand alone I could figure that information out by the > > number > >>of load and store instructions, but this is an operating system with >>context switches, interrupt handlers, etc. >> >>Is there a way to gather this information? There did not seem to be > > any > >>performance monitoring registers as seen with newer PowerPC and x86 >>systems. Can the trace port be used to passively monitor execution > > for > >>load/store instructions? > > > Unfortunately, I have few answers to your questions. However, I know of > a research group in Georgia Tech that is designing/designed a memory > access monitor, which sounds similar to yours. You may want to > correspond with them to exchange notes. I learned of their monitor at > the HPCA 2005 FPGA workshop. Here is a link to the workshop > http//cag.csail.mit.edu/warfp2005/. A link to the workshop > presentations is here at > http//cag.csail.mit.edu/warfp2005/program.html. Their presentation was > titled "Evaluating System wide Monitoring Capsule Design using Xilinx > Virtex II Pro FPGA". Their paper has their contact information. > > As for the trace port, I have used it with a IBM/Agilent RISCWatch (RW) > box, which collects a dynamic trace of the instructions over 8 million > CPU cycles. The main limitation is that it only works for stand alone > apps. When you have virtual memory enabled (while running Linux for > instance), RW uses the TLB to conduct the virtual to physical address > translations. This is great for regular code. 
However, when an > interrupt is detected, the CPU converts to using physical addresses for > the interrupt handler. Unfortunately, RW continues to use the TLB so it > tries to translate physical addresses, for which no "translations" > exists, so RW is unable to resolve interrupt handler instructions. > After this point, the trace is corrupted. In any case, if you are > interested in learning more about RW, you can refer to this appnote > http//direct.xilinx.com/bvdocs/appnotes/xapp545.pdf. It has links to > all manuals for the RW box and its tools. > > Lastly, for my own curiosity, how difficult was it to design and debug > your monitor? The guy I spoke to from Georgia Tech at the workshop said > they used Chipscope to learn the protocol (along with IBM's PLB spec). > He claims that this was a painstaking process. > > NN > Thank you Nju, I am going to dig into those docs right now. My design was not intended to be a monitor, but an active bus transaction modifier. On certain transactions, I have to perform certain operations on the data going to the PPC405. This means I selectively pass data through, or perform some higher latency operations. Since I am currently interested in cache-miss performance, I only count the number of transaction requests from L1 cache. Because it is an individual word that caused the instruction miss, all other words retrieved in the transaction are, of course, not considered as a miss. This makes it extremely easy to monitor the number of transaction requests. While the module is an active component between the CPU and PLB, it is very easy to add a passive monitor once you have a way to have the EDK inject the monitor in the middle. For myself, It required some time to understand the EDK .mpd format and effectively create a PLB-PLB bridge (no logic, pure pass through), and there may be better ways with the "transparent" bus format that I haven't had time to look into. But at the time it was also my first EDK peripheral. 
As for 'learning' the PLB system, I found the IBM CoreConnect Bus Functional Model (BFM) for the PLB, with the PLB doc, to be instrumental in observing every kind of transaction I had to handle. I think the BFM would be far easier than using ChipScope/Docs alone. The BFM allows the generation of almost any kind of cycle-accurate PLB transaction a master and slave can use. One other model I would like to begin using is the Xilinx-provided PPC405 SWIFT model, which allows the same code used by the real processor to run in simulation. This will cause PLB transactions to occur in the same way they will on the real system, i.e. cache line fills based on the PPC405 MMU's state, etc. Regards, Tony

Article: 82536
Austin Lesea wrote: > Pablo, > > Have you logged this into the hotline as a case? Best way to address > new software glitches is to report them. Or, avoid them by using careful regression testing before release ? > > It might be a bug, it might be a new feature. Wot No Smiley ? Please justify how a slower result, more slices, and an ultimate 'no fit' can possibly be called a 'new feature' ?! > > Only way to find out is to look. You can help us just by logging it in. > > Same amount of work as this email posting, maybe less. Well, certainly less bad publicity/egg-on-face for Xilinx ? -jg > Pablo Bleyer Kocik wrote: > >> Hello group. >> >> Regrettably, I just installed 7.1 in my computer in order to try the >> implementation of one of my designs in the SP3E and V4 chips. As a >> first test, >> I targeted it to the same XC3S100 device I was using in 6.3 in order >> to make >> some comparisons. After some minutes and to my dismay, I was surprised >> that my >> design doesn't fit in the XC3S100 any more! 6.3 synthesized the design >> perfectly in ~550 slices, but now 7.1 wants more than 900 slices for it. >> Implementation with the same constraints is also around 25MHz slower. 
>> I checked >> the synthesis report and the main difference is that MUXes went from >> 28 to 52 >> in a totally different arrangement: >> >> 6.3: >> # Multiplexers : 28 >> 12-bit 2-to-1 multiplexer : 1 >> 16-bit 2-to-1 multiplexer : 8 >> 16-bit 4-to-1 multiplexer : 2 >> 1-bit 2-to-1 multiplexer : 14 >> 4-bit 2-to-1 multiplexer : 2 >> 8-bit 2-to-1 multiplexer : 1 >> >> 7.1: >> # 1-bit 4-to-1 multiplexer : 16 >> # 1-bit 8-to-1 multiplexer : 2 >> # 16-bit 4-to-1 multiplexer : 27 >> # 16-bit 8-to-1 multiplexer : 4 >> # 3-bit 4-to-1 multiplexer : 1 >> # 4-bit 4-to-1 multiplexer : 2 >> >> I am also getting the following PACKER Warning I had never heard >> before: "Lut >> X driving carry Y can not be packed with the carry due to conflict >> with the >> common signal requirement between the LUT inputs and the Carry DI/MAND >> pins. >> This would result in an extra LUT for a feedthrough". Does somebody >> has an >> explanation for this? >> >> I can't believe things are so different. Maybe there are new >> "features" I am >> not aware of? >> >> Regards. >>Article: 82537
Nju Njoroge wrote: Interesting question for the "Monitoring Capsule Design" paper... they state they monitor behavior "between the CPU and L1 Dcache." Did they explain how they were able to do this, since the PPC405 and L1 are part of the same hard core? There would be interesting (positive) implications for my research if I could also inject myself between CPU and L1, instead of only between L1 and some instantiated L2 cache or memory bus. Thank you, Anthony > Anthony Mahar wrote: > >>Hello, >> >>Is there a way to do performance monitoring on the PPC405 in the > > Virtex > >>II Pro? I am specifically interested in cache hits. >> >>I have wedged my own device between the CPU's instruction and data > > PLB > >>interfaces and can currently get cache misses. But I need to find a > > way > >>to determine cache hits of an application running under an operating >>system. >> >>If it was stand alone I could figure that information out by the > > number > >>of load and store instructions, but this is an operating system with >>context switches, interrupt handlers, etc. >> >>Is there a way to gather this information? There did not seem to be > > any > >>performance monitoring registers as seen with newer PowerPC and x86 >>systems. Can the trace port be used to passively monitor execution > > for > >>load/store instructions? > > > Unfortunately, I have few answers to your questions. However, I know of > a research group in Georgia Tech that is designing/designed a memory > access monitor, which sounds similar to yours. You may want to > correspond with them to exchange notes. I learned of their monitor at > the HPCA 2005 FPGA workshop. Here is a link to the workshop > http//cag.csail.mit.edu/warfp2005/. A link to the workshop > presentations is here at > http//cag.csail.mit.edu/warfp2005/program.html. Their presentation was > titled "Evaluating System wide Monitoring Capsule Design using Xilinx > Virtex II Pro FPGA". Their paper has their contact information. 
> > As for the trace port, I have used it with a IBM/Agilent RISCWatch (RW) > box, which collects a dynamic trace of the instructions over 8 million > CPU cycles. The main limitation is that it only works for stand alone > apps. When you have virtual memory enabled (while running Linux for > instance), RW uses the TLB to conduct the virtual to physical address > translations. This is great for regular code. However, when an > interrupt is detected, the CPU converts to using physical addresses for > the interrupt handler. Unfortunately, RW continues to use the TLB so it > tries to translate physical addresses, for which no "translations" > exists, so RW is unable to resolve interrupt handler instructions. > After this point, the trace is corrupted. In any case, if you are > interested in learning more about RW, you can refer to this appnote > http//direct.xilinx.com/bvdocs/appnotes/xapp545.pdf. It has links to > all manuals for the RW box and its tools. > > Lastly, for my own curiosity, how difficult was it to design and debug > your monitor? The guy I spoke to from Georgia Tech at the workshop said > they used Chipscope to learn the protocol (along with IBM's PLB spec). > He claims that this was a painstaking process. > > NN >Article: 82538
I have a PhD student (Jihan Zhu) working in this area. There have been many implementations of NN's in FPGAs over the years. The first seems to have been Cox, C.E. and E. Blanz, GangLion - a fast field-programmable gate array implementation of a connectionist classifier. IEEE Journal of Solid-State Circuits, 1992. 28(3): p. 288-299. A very brief (4 page) survey appeared in FPL in 2003: J. Zhu and P. Sutton, "FPGA Implementations of Neural Networks - a Survey of a Decade of Progress," in Proceedings of 13th International Conference on Field Programmable Logic and Applications (FPL 2003), Lisbon, Sep 2003. It's online at http://www.itee.uq.edu.au/~peters/papers/zhu_sutton_fpl2003.pdf Peter "Peter Sommerfeld" <psommerfeld@gmail.com> wrote in message news:1113232756.948911.296830@o13g2000cwo.googlegroups.com... > Take a look at GenoByte, http://www.genobyte.com/cbm.html. > > News is old though, ca. 2000. > > -- Pete > > e wrote: >> Has anyone investigated implementing neural nets in FPGAs? >

Article: 82539
Austin, thank you for your responses. My replies are below: "Austin Lesea" <austin@xilinx.com> wrote in message news:d3k3jm$mav1@cliff.xsj.xilinx.com... > gja, > > See below, > > Austin > > gja wrote: >> Austin, are you saying that you've actually seen the circuit actually >> pass 3v across the switch ? > Yes, I have. OK, I will have to investigate further now that I know that it should. > Or are you saying that the output SHOULD be clamped >> around 2.3v ? > No, I am not. >> >> Perhaps I should've been more clear, but I am currently testing a new ckt >> board built with the circuit in xapp646 and a virtex2 part. >> With the IDT vcc at 3.3v, the maximum voltage I am seeing with a scope >> is about 2.3v on the output side of the switch. IE., when the virtex2 >> part is driving (lvttl) one side of the switch at 3.3v, the other side of >> the switch is about 2.3v. > Into what load? On a "real" PCI bus is where I made the measurements. > Perhaps if you use the resistor "standard termionation" (which is anything > but a standard) you will get some other result? My application uses the switch to connect a Virtex2 to a slow (1us access times) 5v TTL databus, not PCI. The virtex2 pins are configured as lvcmos33 iobuf and is the only device on one side of the quickswitch, so I don't think it is a loading problem. When the 5v FCT chip is driving the virtex2 (thru the switch), I see 4v on the ttl side but only 2.3v on the virtex2 side. > When the 5v ttl part 74FCT645 is writing to the virtex2 part, >> I see 4v on that side of the switch and 2.3v on the virtex2 side. > OK. I will admit that the TI part has some advantages, but it is also an > active device, and adds delay, doesn't it? No, the TI devices SN74CB3T3384 and SN74CBTD3384C are also FET switches with the same delay spec as the IDT part. The SN74CB3T3384 device uses a vcc of 3.3v while the SN74CBTD3384C uses 5v. Both devices claim to do 5v to 3.3 v level translation. 
> When I >> originally looked at xapp646, it stated that it would clamp to less than >> ~3v outputs, I didn't think it would be closer to 2.3v. If I had used >> the implementation in IDT's or TI's appnote where they drive the vcc pin >> at 4.3v using a diode from 5v, I wonder if that would have worked better. > Except that then you do not limit the voltages to less than what we > require. They do, because the QS3861 clamps to VCC-1, so 4.3 - 1 = 3.3v.

Article: 82540
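The disagreement in this thread comes down to pass-transistor behavior: an NMOS switch cannot pass anything much above VCC minus its threshold voltage. A toy model of that clamp (the 1.0 V threshold is an assumed figure, the "VCC - 1" rule of thumb from the IDT app notes quoted above; measured clamps vary with loading and process, as the thread itself shows):

```python
# Toy model of a QuickSwitch-style NMOS pass gate: the output follows the
# input until it approaches VCC - Vtn, where the FET pinches off.
# Vtn is assumed to be 1.0 V here; real devices differ with load/process.
def pass_fet_out(v_in, vcc, vtn=1.0):
    return min(v_in, vcc - vtn)

# xapp646 arrangement: switch VCC = 3.3 V, FPGA drives 3.3 V LVTTL
print(pass_fet_out(3.3, vcc=3.3))  # clamps near 2.3 V, as gja measured
# IDT/TI arrangement: VCC raised to ~4.3 V via a diode drop from 5 V
print(pass_fet_out(5.0, vcc=4.3))  # clamps near 3.3 V
```

Under this model both observations in the thread are consistent: with the switch VCC at 3.3 V the output tops out around 2.3 V, and raising VCC to 4.3 V moves the clamp to about 3.3 V.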
Antti Lukats wrote: > "Stephane" <stephane@nospam.fr> schrieb im Newsbeitrag > news:d3jggu$kpo$1@ellebore.extra.cea.fr... > > Antti Lukats wrote: > > > "Stephane" <stephane@nospam.fr> schrieb im Newsbeitrag > > > news:d3j43r$e32$1@ellebore.extra.cea.fr... > > > > > I don't agree with you: here are the 32 configuration data bits: > > > > PAD209 X27Y127 IOB_X1Y127 F14 1 IO_L1P_D31_LC_1 > > those are Local Clock, the SelectMAP is 8 bit wide !!!! Actually the OP is correct - that IS supposed to be a 32-bit SelectMAP interface... the ug075.pdf pinout document discusses it briefly. I don't blame everyone for being confused about it though - Xilinx makes just enough mention of it that you wonder if it might work, but when I asked my trusty FAE about it a few months ago, he said it is not supported at this time. Also, _LC pins are Low Capacitance pins (can't do LVDS output). Local clock pins are called _CC (for Clock Capable). Global clocks are thankfully _GC. > > >>so the minimum reconfiguration time for this part should be a little bit > > >>more than 7.4/100/32 = 2.3ms 7.4/100/8 = 9.25 ms, plus a little at the beginning and end. I'd budget at least 10ms, maybe a few more. Have fun! Marc

Article: 82541
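Marc's corrected figure is easy to reproduce: a ~7.4 Mbit bitstream over an 8-bit SelectMAP bus at a 100 MHz configuration clock takes about 9.25 ms, versus ~2.3 ms if the 32-bit interface were usable. A small sketch of the arithmetic (Marc's start/end overhead is not modeled, hence his "budget at least 10ms" caveat):

```python
# Configuration-time estimate for the SelectMAP figures in the thread:
# a ~7.4 Mbit bitstream, 100 MHz configuration clock, variable bus width.
def config_time_ms(bitstream_mbits, clock_mhz, bus_width_bits):
    transfers = bitstream_mbits * 1e6 / bus_width_bits  # bus cycles needed
    return transfers / (clock_mhz * 1e6) * 1e3          # time in ms

print(config_time_ms(7.4, 100, 8))   # ~9.25 ms, matching Marc's figure
print(config_time_ms(7.4, 100, 32))  # ~2.3 ms if 32-bit SelectMAP worked
```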
> Regarding the guy who read the antique calculator ROMs in metal cans, could > he not have simply read them out by whatever method the CPU did? Yes, that would have been an option, but the protocol for reading these ROMs is more than just supplying an address and grabbing the data---the ROMs maintain internal state that an external reader would have to model, and multiple ROMs sit on the same serial bus. They're really more like coprocessors than ROMs. Cheers, Peter Monta

Article: 82542
I'm not sure whether you can get support from Altera; if possible, they
will probably give you the .pof file or a way to examine the .pof file
using Quartus II. Btw, try to use the jam player to examine the device.
I'm sure it is supported.

"Andrew Holme" <ajholme@hotmail.com> wrote in message
news:<1113389303.492837.293510@l41g2000cwc.googlegroups.com>...
> Andrew Holme wrote:
>> Andrew Holme wrote:
>>> Kar wrote:
>>>> You can always search the answer at the www.altera.com.
>>>>
>>>> [snip: text of articles]
>>>
>>> Thanks for the lead.
>>>
>>> For the benefit of others, those expired links are:
>>> http://www.altera.com/support/kdb/2004/09/rd09092004_5239.html
>>> http://www.altera.com/support/kdb/2003/12/rd12082003_9642.html
>>>
>>> Perhaps the most useful link for me, since I don't have the old
>>> MAX+PLUS II software, and can't use "Examine" to create a POF file
>>> from a fresh factory-shipped device, is this link which explains
>>> how to blank a MAX7000S device using the JAM player:
>>>
>>> http://www.altera.com/support/kdb/1999/09/rd09131999_640.html
>>
>> After re-reading the above, I'm not sure if the JAM player "erase"
>> method is an "erase" or a "blank" as defined in the other articles,
>> i.e. does it tri-state the I/O pins?
>
> OK, having gone round the loop one last time (this blank/erased
> business is very confusing!) I believe that reading a blank POF is
> (quote) "The only way to get the I/O pins to tri-state again" - so
> I'm hosed.

Article: 82543
I download my design via ByteBlaster USB. The LEDs wink and blink as if
the FPGA is taking the data. At the end of downloading, the original
configuration is still running, with the 4 LEDs down-counting and the
NCO running the two sine waves. I did change the default configuration
of the unused pins to tri-stated inputs. I also downloaded all of the
supplied configurations that came along with the board. No effect.
Anyone have any suggestions?

Tks,
Jer

Article: 82544
James Beck wrote:
> In article <upWdnTeAPOxSqcDfRVn-sA@buckeye-express.com>,
> abuse@127.0.0.1 says...
>> praveen.kantharajapura@gmail.com wrote:
>>> Hi all,
>>>
>>> This is a basic question regarding SDA and SCL pins.
>>> Since both these pins are bidirectional, these pins need to be
>>> tristated, so that the slave can acknowledge on SDA.
>>
>> No, both pins are not bidirectional. Only the master device drives
>> the SCK line, and all slaves must leave their SCK's as input.
>
> Not true, a slave device can extend a cycle through clock stretching
> and the only way to do that is for the slave device to be able to
> hold the clock line low.
>
> http://www.i2c-bus.org/clockstretching/

Explain that to a Noob. Please.

Article: 82545
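Clock stretching, as described in the post above, means a correct
bit-banged master must read SCL back after releasing it and wait for
the line to actually rise before counting the clock edge. A toy sketch
of that wait loop (the callable stands in for sampling the pin; this is
a behavioral model, not real GPIO code):

```python
def wait_scl_high(read_scl, max_polls=1000):
    """After the master releases SCL, poll it until a stretching
    slave stops holding it low, or give up after max_polls samples."""
    for _ in range(max_polls):
        if read_scl():
            return True   # slave released SCL; the clock edge is valid
    return False          # slave stretched longer than we waited

# Simulated slave that stretches the clock for three samples:
samples = iter([0, 0, 0, 1])
print(wait_scl_high(lambda: next(samples)))  # True
```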
Mark Jones wrote:
> praveen.kantharajapura@gmail.com wrote:
>> Hi all,
>>
>> This is a basic question regarding SDA and SCL pins.
>> Since both these pins are bidirectional, these pins need to be
>> tristated, so that the slave can acknowledge on SDA.
>
> No, both pins are not bidirectional. Only the master device drives
> the SCK line, and all slaves must leave their SCK's as input.

I do not agree with you, Mark. If a slave can't receive or transmit
another complete byte of data until it has performed some other
function, for example servicing an internal interrupt, it can hold the
clock line SCL LOW to force the master into a wait state.

>> But i have seen in some docs that a '1' need to be converted to a
>> 'Z' while driving on SDA and SCL, what is the reason behind this????
>>
>> Thanks in advance,
>> Praveen
>
> As others have said, usually SCK and SDA have a 1-10k pullup resistor
> to Vdd. This makes the signal a 1 while no device is pulling a pin
> low. So set the output pin to a zero, and toggle it as being an input
> versus an output, to generate your digital signal.
>
> Since SCK is never "tristated", you can just drive it as 0/1 output
> using the master device and omit the pullup resistor. Of course, the
> master device must be able to source and sink a few mA. The PIC line
> of microcontrollers has no problem doing this; the Atmels probably
> work the same.
>
> Furthermore, if the Atmels have a pin which is "open collector output
> only", then that would work for SDA without needing to tristate it.
> Just set it as an output and "0" will pull SDA low, and "1" will
> release it (allowing the pullup resistor to make SDA "1").
>
> For power-hungry applications, you can increase the pullup resistors
> at the expense of speed and noise rejection. 100k works well for
> shielded, battery-powered applications.
>
> Cheers,
> MCJ

Article: 82546
e wrote:
> Has anyone investigated implementing neural nets in FPGAs?

I saw a paper maybe 10 years ago on a neural net set up in an FPGA
(small by today's standards). It used bit-serial arithmetic and was
able to fit more nodes than one might have thought. IIRC, this was an
early Xilinx 4K family part.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 82547
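For context on why bit-serial arithmetic helped in the paper Ray
mentions: each node's adder collapses to a one-bit full adder plus a
single carry flip-flop, clocked once per bit, so far more nodes fit per
chip at the cost of N clocks per N-bit word. A software sketch of that
serial adder (assuming unsigned, LSB-first words; the helper names are
mine):

```python
def bit_serial_add(a_bits, b_bits):
    """One full adder plus one carry register, clocked once per bit
    pair (LSB first) -- the entire add datapath of a bit-serial node."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out

def to_bits(x, n=8):                  # LSB-first helper
    return [(x >> i) & 1 for i in range(n)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

print(from_bits(bit_serial_add(to_bits(100), to_bits(27))))  # 127
```

In an FPGA the loop body is one LUT and one flip-flop, which is why a
mid-1990s XC4000 could hold a surprising number of nodes.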
Antti Lukats wrote:
> <praveen.kantharajapura@gmail.com> schrieb im Newsbeitrag
> news:47cf10b7.0504130430.9a34497@posting.google.com...
>> Hi all,
>>
>> This is a basic question regarding SDA and SCL pins.
>> Since both these pins are bidirectional, these pins need to be
>> tristated, so that the slave can acknowledge on SDA.
>>
>> But i have seen in some docs that a '1' need to be converted to a
>> 'Z' while driving on SDA and SCL, what is the reason behind this????
>>
>> Thanks in advance,
>> Praveen
>
> well in order to drive '1' (EXTERNAL RESISTIVE PULLUP) you need to Z
> the wire eg tristate it.
> 0 is driven as 0
> 1 is driven (or released) as Z, ext pullup will pull the wire high

In order to drive a '1', I will not tristate it to 'Z'; I will drive a
'1' only. Are there any issues (hardware malfunction) if I drive a '1'
instead of 'Z'?

> Antti

Article: 82548
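To answer the question above: driving a hard '1' works only until some
other device pulls the same line low, at which point two push-pull
outputs fight and current flows directly between them, potentially
damaging a driver. Releasing to 'Z' lets the pull-up supply the '1'
safely. A toy wired-AND model of the bus makes the logic visible:

```python
def bus_level(drivers):
    """Resolve an open-drain bus: each driver contributes 0 or 'Z'.
    Any 0 wins (wired-AND); all-'Z' lets the pull-up give a 1."""
    return 0 if 0 in drivers else 1

# Master releases SDA ('Z') during the ACK slot; slave pulls it low:
print(bus_level(['Z', 0]))    # 0 -- the master can see the ACK
# Everyone released: the pull-up restores the idle high:
print(bus_level(['Z', 'Z']))  # 1
```

Had the master driven a hard '1' during the ACK slot, the slave's low
would be fighting the master's high instead of cleanly winning.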
You might google "Federico Faggin" (my boss at Zilog 26 years ago) or
Synaptics, his later company...

Peter Alfke

Article: 82549
You can read a summary here:
http://www.i2c-bus.org/Termination.277.0.html

But I recommend visiting the Philips site and downloading the
specification:
http://www.semiconductors.philips.com/markets/mms/protocols/i2c/

You can probably safely ignore the multimaster and high-speed modes,
but you should be aware of their existence. I personally don't know of
anybody using high-speed mode, but it does make the I/O requirements
more interesting.

-Keith
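The pull-up sizing tradeoff mentioned earlier in the thread (1-10k for
speed versus ~100k for battery life) comes down to the RC rise time of
the released line. A rough sketch, using the I2C spec's 30%-to-70%
rise-time definition; the 100 pF bus capacitance is an assumed example
value, not a measurement:

```python
import math

def i2c_rise_time_ns(r_pullup_ohm, c_bus_pf):
    """30%-to-70% rise time of the RC-charged bus line, in ns:
    t = ln(0.7/0.3) * R * C  (about 0.85 * R * C)."""
    return math.log(0.7 / 0.3) * r_pullup_ohm * c_bus_pf * 1e-3

# 1.8k pull-up on an assumed 100 pF bus: ~150 ns, comfortably inside
# standard-mode's 1000 ns budget; a 100k pull-up on the same bus is
# ~55x slower, which is why it only suits slow, low-power designs.
print(round(i2c_rise_time_ns(1800, 100)))
print(round(i2c_rise_time_ns(100000, 100)))
```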