Ok, I'm not designing an Ethernet MAC but I see that people now make FPGA cores for this. I had a few "general" questions on how these might be implemented. 1. I know Ethernet bits flow across as Manchester encoded data. I see that many of the cores run from a 25 MHz clock. How is the 125 Mbps (8/10 encoded data) serial stream generated without some sort of clock multiplication? 2. How does the clock / data recovery work from the 25 MHz clock too? I've seen examples of Manchester decoding and usually there is some high-speed clock available that it used to examine the preamble/sync bits and this is clocked into a shift register at 4-8x line frequency. The shift register is used to adjust a counter which determines when to sample -- or so my understanding goes. Are there any good resources that explain the "low-level" details of how hardware Ethernet implementations might work? Thank you. H.Article: 84751
Piotr Wyderski wrote: > amyler@eircom.net wrote: > >> You don't need to connect the negative input to GND. >> Just connect your single ended clock to one of the >> CLK[3..0] inputs. > > > What should I do with the unused inputs? > Just leave them floating in the air? Never leave inputs open. They might self-oscillate and draw a lot of power. Pull them low, pull them high, or make them outputs. Rene -- Ing.Buero R.Tschaggelar - http://www.ibrtses.com & commercial newsgroups - http://www.talkto.netArticle: 84752
I couldn't find one. In the end I had to design a quick LabVIEW program which, for my application, took text and converted it into ASCII. Matt "Jedi" <me@aol.com> wrote in message news:nvYke.91$he7.63@read3.inet.fi... > Hello.. > > Is there any tool that generates the .mem ROM files from a > binary or intel hex file compatible with Lattice Module/IP Manager? > > > regards > jediArticle: 84753
> Are there any good resources that explain the "low-level" details of how > hardware Ethernet implementations might work? http://www.fpga4fun.com/10BASE-T.html Cheers, JonArticle: 84754
What you are discussing here is all done by the phy chip. You don't have to care unless you are replacing it yourself Simon "Hw" <localhost@com.com> wrote in message news:N1ele.34256$ya2.32286@tornado.socal.rr.com... > Ok, > > I'm not designing an Ethernet MAC but I see that people now make FPGA > cores for this. > > I had a few "general" questions on how these might be implemented. > > 1. I know Ethernet bits flow across as Manchester encoded data. I see > that many of the cores run from a 25 MHz clock. > > How is the 125 Mbps (8/10 encoded data) serial stream generated without > some sort of clock multiplication? > > 2. How does the clock / data recovery work from the 25 MHz clock too? > > I've seen examples of Manchester decoding and usually there is some > high-speed clock available that it used to examine the preamble/sync > bits and this is clocked into a shift register at 4-8x line frequency. > The shift register is used to adjust a counter which determines when to > sample -- or so my understanding goes. > > Are there any good resources that explain the "low-level" details of how > hardware Ethernet implementations might work? > > Thank you. > H.Article: 84755
Hi, We have the following questions related to Virtex 4 configuration frames: a). What is the shape and size of a frame in Virtex 4 device? In Virtex 2, each frame is contained in one vertical column of the device. But we could not find any information related to the shape of frame in Virtex 4. b). How many CLBs does one frame include or vice versa? c). In one datasheet, it is written that Virtex 4 is a tile based device. Does it mean that each configuration frame is completely contained in one tile? If yes, how many frames/CLBs does one tile contain? d). Can we use the Xilinx JBits SDK to configure a Virtex 4 frame? The documentation for JBits 3.0 does not refer to Virtex 4 and only seems to support Virtex 2. If it doesn't support Virtex 4, what is the equivalent toolkit for Virtex 4 devices? Please do help as we could not find information related to this in Xilinx website or in newsgroups. Thanks in advance, Love SinghalArticle: 84756
Hello... I know that the ARC A4 RISC core is mainly intended to go into ASICs... but is there any development platform using this core inside an FPGA besides the ARC angel from arc.com? thx in advance jdeiArticle: 84757
Hi Peter, I am not able to use the debug halt signal. I am using a customized board and it does not have buttons which I can use (I will give it a try by using virtual IO with chipscope). I understand that the principle of starting up like I described should work?! If I convert the generated .elf file to a binary file, I see that the boot0 section is laid at the correct address. I can also see here what value my user peripheral should return ("4BFF7FE4"). It is still not working, I checked the peripheral by placing it to another location and have brams from 0xffff0000 till 0xffffffff. So a normal startup is done and software is running fine. Then I do a 32-bits read action to my peripheral and I read what I expected to read. What could be wrong? Frank "Peter Ryser" <peter.ryser@xilinx.com> wrote in message news:d73j3d$afq1@cliff.xsj.xilinx.com... > Frank, > > here is how to debug the setup. > > First of all, an opcode 4BFF7FE4 at address -4 (aka 0xfffffffc) as > > (gdb) x/x -4 > 0xfffffffc: 0x4bff7fe4 > > will resolve to the following instruction > > (gdb) x/i -4 > 0xfffffffc: b 0xffff7fe0 > > In other words, assuming your boot peripheral is correct, the processor > will jump to 0xffff7fe0. That's where you need to map the .boot0 section > or more correctly _boot0. > > Now, to see what's happening you should bring the DEBUG_HALT signal of the > PPC to a user IO pin, for example one of the push buttons on the board. > The DEBUG_HALT signal allows stopping the processor, i.e. keep it at the > reset vector after a reset or after loading the FPGA bitstream. > Assert the DEBUG_HALT signal and load the bitstream. Connect with XMD. > Deassert DEBUG_HALT. The PPC is still stopped because of the debugger. > Single-step the processor. By following the PC you will be able to see > what the PPC is doing. If something in your setup is wrong you will most > likely end up at address 0x????0700, the exception vector address for an > invalid instruction exception. > > To monitor the bus transactions instantiate ChipScope (BTW, instead of > bringing the DEBUGH_HALT signal above to a user IO pin you could hook it > up to a Virtual IO port) and trigger on the PLB request signal. > > If you try to minimize the boot code, i.e. not use a BRAM at the reset > vector, you might want to have a look at application note XAPP571 > (http://www.xilinx.com/bvdocs/appnotes/xapp571.pdf). > > - Peter > > > > Frank van Eijkelenburg wrote: >> I have made a powerpc system with a user peripheral connected to the plb >> bus. The user peripheral has an address range from 0xffffff00 - >> 0xffffffff. At startup the powerpc starts executing from 0xfffffffc, so >> my peripheral is accessed at startup. In case of a plb bus read action, >> it always returns "4BFF7FE4" which is a jump to my boot0 section laid in >> bram. In the linkerfile I forced the boot0 section to a specified >> address, so the jump should be correct anytime. However, the system is >> not starting (no output from my uart port). Should the described method >> work, or do I forget something? If it should work, what could be wrong (I >> already tried to return "E47FFF4B" in case the byte order may be >> incorrect, but it gave the same result). >> >> TIA, >> Frank >Article: 84758
Why would you want one? There are lots of better CPUs than the A4? Cheers, JonArticle: 84759
Rene Tschaggelar wrote: > Never have open indputs. They might self oscillate and > draw a lot of power. Pull them low, pull them high, > make the output. If you mean I/O, then you are absolutely right, but I would like to know what should I do with unused CLKx inputs? The manual says that they don't have an alternative I/O function, so I am unsure whether these inputs need a special treatment. My first idea was "connect them to GND", but AMyler has told me "You don't need to connect the negative input to GND"... Best regards Piotr WyderskiArticle: 84760
Hi Piotr, Just to clarify, when I said "You don't need to connect the negative input to GND" what I meant was that you can use the other CLK pin for another input such as your system reset, which can then use a global clock net to propagate around the device from that pin. But if you don't need to use it at all then sure just go ahead and connect it to GND. AlanArticle: 84761
Hi all, I'm a low-level kind of programmer (and wannabe hardware designer). When I write a piece of code, be it assembly, C or Verilog, I need to have a precise idea of how it will end up. I've begun to explore the wonderful world of FPGAs and of the Verilog hardware description language, and while I do understand that combinational logic can be reduced to its minimum terms, and I also understand latches and other sequential logic, I have big problems figuring out how a state machine ends up ("schematically speaking"). Although I've hunted for, and found, some schematics of state machines, I still haven't found a clear explanation and description of them that lets me sleep at night. ;) My aim is, other than understanding state machines at an intuitive level, to write them in the most efficient way. When I'm writing combinational code, I know when it will end up in too many logic gates... same for sequential, but for state machines the "black box" is still in my mind. Many thanks in advance for any attempts to illuminate me. Greets, MikeArticle: 84762
http://www.cs.umd.edu/class/spring2003/cmsc311/Notes/Seq/impl.html or just type "Mealy Moore" in google search Aurash nospam@nospam.com wrote: >Hi all, > >I'm a low-level kind of programmer (and wannabe hardware >designer). When I write a piece of code, be it assembly, >C or Verilog, I need to have a precise idea of how it >will end up. >I've began to explore the wonderful world of FPGAs and >of the Verilog hardware description language, and while >I do understand that combinational logic can be reduced >to its minimum terms, and I also understand latches and >other sequential logic, I have big problems in figuring >how a state machine ends up ("schematically speaking"). > >Although I've hunted for, and found, some schematics of >state machines, I still haven't found a clear explanation >and description of them that makes me sleep at night. ;) > >My aim is, other than understanding state machines at a >intuitive level, to write them in the most efficient way. > >When I'm writing combinational code, I know when it will >end up in too many logic gates.. same for sequential, but >for state machines the "black box" is actually into my >mind. > >Many Thanks in advance for any attempts to illuminate me. > >Greets, >Mike > > > -- __ / /\/\ Aurelian Lazarut \ \ / System Verification Engineer / / \ Xilinx Ireland \_\/\/ phone: 353 01 4032639 fax: 353 01 4640324Article: 84763
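As a rough illustration of the Mealy/Moore material pointed to above (a hypothetical sketch, not code from any of the posters; the module, state and signal names are made up), the usual Verilog style is one clocked always block for the state register and one combinational block for the next-state and output logic:

// Hypothetical example: a tiny Moore machine.
// The synthesized result is predictable: one small state register
// (here two flip-flops, unless the tool re-encodes the states) plus
// a cloud of next-state/output gates feeding their D inputs.
module simple_fsm (
    input  wire clk,
    input  wire rst,
    input  wire start,
    input  wire done,
    output reg  busy
);
    localparam IDLE = 2'd0, RUN = 2'd1, FLUSH = 2'd2;

    reg [1:0] state, next_state;

    // State register: this becomes the flip-flops.
    always @(posedge clk) begin
        if (rst) state <= IDLE;
        else     state <= next_state;
    end

    // Next-state and output logic: this becomes combinational gates/LUTs.
    always @(*) begin
        next_state = state;   // defaults avoid unintended latches
        busy       = 1'b0;
        case (state)
            IDLE:  if (start) next_state = RUN;
            RUN:   begin
                       busy = 1'b1;
                       if (done) next_state = FLUSH;
                   end
            FLUSH: next_state = IDLE;
            default: next_state = IDLE;
        endcase
    end
endmodule

Written this way there is little mystery about what the synthesizer produces: the first block turns into a bank of flip-flops holding the state encoding, and the second block turns into the combinational cloud driving those flip-flops and the outputs.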
pasacco wrote: > Hi > > Thankyou for comment.... > Now I am making effort fixing,,,,,,with no luck.... > But I found that this problem is not related with BRAM..... > > Is someone aware of the information below? > I am doing XST synthesis (without special constraint and without > UCF)..... > > Thankyou for comment again :) > > ----------------------------------------------------------------- > INFO:LIT:95 - All of the external outputs in this design are using slew > rate limited output drivers. The delay on speed critical outputs can be > dramatically reduced by designating them as fast outputs in the > schematic. I'm guessing all of your outputs are defaulting to LVTTL, which also defaults to slow slew rate. If you are meeting your output timing (clock to out usually, or pad to pad), this can be ignored. If some outputs need to be faster this message is letting you know that you can change the slew rate to "FAST" to pick up a couple nanoseconds. > INFO:MapLib:562 - No environment variables are currently set. This just means that the mapper didn't see any system environment variables that would change its behaviour. Therefore only the command line arguments determine the mapper's behaviour. This is usually O.K. unless you need some fairly advanced options. > INFO:MapLib:535 - The following Virtex BUFG(s) is/are being retargetted > to Virtex2 BUFGMUX(s) with input tied to I0 and Select pin tied to > constant 0: > BUFG symbol "CLK_bufg" (output signal=clk_int2) > ----------------------------------------------------------------- Generally speaking, "INFO:" lines are not warnings, but are there to let you know that you might have an opportunity to improve your design's performance. I wouldn't worry about making this kind of message go away. For example you could explicitly call out a BUFGMUX for your "CLK_bufg" to eliminate the last message, but I think using the more generic BUFG is more readable and portable, so I wouldn't do it. If you're having trouble getting your design to work, I would suggest adding some timing constraints to ensure a better mapping, place and route.Article: 84764
Austin Franklin wrote: > Hi Dave... Gary, are you still listening? Like I said before: "If you are considering learning VHDL (or Verilog), my advice is a resounding YES; Do it, learn it." Austin, Austin..... More than two decades working with programmable logic and you are NOT READY to suggest that learning VHDL or Verilog would enhance the skill set of a schematic designer! Tell the truth, you want to be the only one with multiple skills don't you!! ;-) I whole heartedly encourage Gary to "code in HDL"; he already uses schematic entry and doesn't require any encouragement maintaining that skill.Article: 84765
nospam@nospam.com wrote: > My aim is, other than understanding state machines at a > intuitive level, to write them in the most efficient way. > > When I'm writing combinational code, I know when it will > end up in too many logic gates.. same for sequential, but > for state machines the "black box" is actually into my > mind. > > Many Thanks in advance for any attempts to illuminate me. > > Greets, > Mike > Mike- One thing you could do is download one of the free synthesis tools from Xilinx or Altera and write a couple simple machines, then look at the schematic version of their product. For simple machines, it should be pretty easy to see what the tool did. JakeArticle: 84766
The Ethernet MAC passes the data to the Ethernet PHY at a rate of 100 Mbps. It passes it via the standard MII protocol, which is a 4-bit wide data bus (4 bits for TX, 4 for RX). The MAC doesn't have any knowledge of the actual encoding of the signal on the wire; that is the PHY's job. You could well use the same MAC for either 100BASE-TX or 100BASE-FX, for example (electrical or optical). Manchester encoding uses the same rate as the data, i.e. 100 Mbps; the PHY thus does have a clock multiplier. Manchester encodes a 0 as '01' and a 1 as '10'. The pattern of bits "0 1 1 1 1 0 0 1" encodes to "01 10 10 10 10 01 01 10". Another, more curious example is the pattern "1 0 1 0 1 ..." which encodes to "10 01 10 01 10", which could also be viewed as "1 00 11 00 11 0". Thus for a 10 Mbps Ethernet LAN, the preamble sequence encodes to a 5 MHz square wave! (i.e., one half cycle in each 0.1 microsecond bit period.)Article: 84767
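To make the "0 -> 01, 1 -> 10" rule concrete, here is a hypothetical Verilog sketch (not from the thread; the names and the clocking scheme are made up) of a Manchester encoder driven by a clock at twice the data rate, which is exactly why the line runs at double the data rate:

// Hypothetical sketch: Manchester encoder, two line half-bits per data bit.
module manchester_enc (
    input  wire clk2x,     // two ticks per data bit (e.g. 20 MHz for 10 Mbps)
    input  wire rst,
    input  wire data_in,   // assumed stable at the start of each bit period
    output reg  tx
);
    reg half;      // 0 = about to send the first half-bit, 1 = the second
    reg bit_hold;  // data bit captured for use in the second half-bit

    always @(posedge clk2x) begin
        if (rst) begin
            half     <= 1'b0;
            bit_hold <= 1'b0;
            tx       <= 1'b0;
        end else begin
            if (!half) begin
                bit_hold <= data_in;
                tx       <= data_in;    // first half-bit: the data bit itself
            end else begin
                tx       <= ~bit_hold;  // second half-bit: its complement
            end
            half <= ~half;              // 0 -> "01", 1 -> "10"
        end
    end
endmodule

Feeding it the alternating pattern 1 0 1 0 ... produces the 10 01 10 01 ... stream described above, i.e. the 5 MHz square wave during the preamble of a 10 Mbps link.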
When you wrote your peripheral did you take into account that it has to handle four-word cache line transactions? The instruction side of the processor will issue such a transaction to the peripheral when it comes out of reset. - Peter Frank van Eijkelenburg wrote: > Hi Peter, > > I am not able to use the debug halt signal. I am using a customized board > and it does not have buttons which I can use (I will give it a try by using > virtual IO with chipscope). I understand that the principle of starting up > like I described should work?! If I convert the generated .elf file to a > binary file, I see that the boot0 section is laid at the correct address. I > can also see here what value my user peripheral should return ("4BFF7FE4"). > It is still not working, I checked the peripheral by placing it to another > location and have brams from 0xffff0000 till 0xffffffff. So a normal startup > is done and software is running fine. Then I do a 32-bits read action to my > peripheral and I read what I expected to read. What could be wrong? > > Frank > > > "Peter Ryser" <peter.ryser@xilinx.com> wrote in message > news:d73j3d$afq1@cliff.xsj.xilinx.com... > >>Frank, >> >>here is how to debug the setup. >> >>First of all, an opcode 4BFF7FE4 at address -4 (aka 0xfffffffc) as >> >>(gdb) x/x -4 >>0xfffffffc: 0x4bff7fe4 >> >>will resolve to the following instruction >> >>(gdb) x/i -4 >>0xfffffffc: b 0xffff7fe0 >> >>In other words, assuming your boot peripheral is correct, the processor >>will jump to 0xffff7fe0. That's where you need to map the .boot0 section >>or more correctly _boot0. >> >>Now, to see what's happening you should bring the DEBUG_HALT signal of the >>PPC to a user IO pin, for example one of the push buttons on the board. >>The DEBUG_HALT signal allows stopping the processor, i.e. keep it at the >>reset vector after a reset or after loading the FPGA bitstream. >>Assert the DEBUG_HALT signal and load the bitstream. Connect with XMD. >>Deassert DEBUG_HALT. The PPC is still stopped because of the debugger. >>Single-step the processor. By following the PC you will be able to see >>what the PPC is doing. If something in your setup is wrong you will most >>likely end up at address 0x????0700, the exception vector address for an >>invalid instruction exception. >> >>To monitor the bus transactions instantiate ChipScope (BTW, instead of >>bringing the DEBUGH_HALT signal above to a user IO pin you could hook it >>up to a Virtual IO port) and trigger on the PLB request signal. >> >>If you try to minimize the boot code, i.e. not use a BRAM at the reset >>vector, you might want to have a look at application note XAPP571 >>(http://www.xilinx.com/bvdocs/appnotes/xapp571.pdf). >> >>- Peter >> >> >> >>Frank van Eijkelenburg wrote: >> >>>I have made a powerpc system with a user peripheral connected to the plb >>>bus. The user peripheral has an address range from 0xffffff00 - >>>0xffffffff. At startup the powerpc starts executing from 0xfffffffc, so >>>my peripheral is accessed at startup. In case of a plb bus read action, >>>it always returns "4BFF7FE4" which is a jump to my boot0 section laid in >>>bram. In the linkerfile I forced the boot0 section to a specified >>>address, so the jump should be correct anytime. However, the system is >>>not starting (no output from my uart port). Should the described method >>>work, or do I forget something? If it should work, what could be wrong (I >>>already tried to return "E47FFF4B" in case the byte order may be >>>incorrect, but it gave the same result). 
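As a rough sketch of what Peter is pointing at (hypothetical code, not from the thread, using made-up generic handshake names rather than real PLB/IPIF port names): the boot peripheral has to return a valid word, with one acknowledge per beat, for every beat of the four-word line fill, not just for the single word at 0xFFFFFFFC. Since execution starts at 0xFFFFFFFC, the other three words of the line are fetched but never executed, so one simple approach is to return the same branch opcode for every beat:

// Hypothetical boot-vector stub; rd_req/rd_ack/rd_data are placeholder
// names, NOT actual PLB or IPIF signal names.
module boot_vector_stub (
    input  wire        clk,
    input  wire        rst,
    input  wire        rd_req,   // one request strobe per read data beat
    output reg         rd_ack,   // one acknowledge per read data beat
    output wire [31:0] rd_data
);
    // Relative branch; fetched and executed from 0xFFFFFFFC it is
    // "b 0xFFFF7FE0", the other words of the line fill are never executed.
    assign rd_data = 32'h4BFF_7FE4;

    always @(posedge clk) begin
        if (rst) rd_ack <= 1'b0;
        else     rd_ack <= rd_req;   // acknowledge every beat of the line fill
    end
endmodule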
>>> >>>TIA, >>>Frank >> > >Article: 84768
amyler@eircom.net wrote: > But if you don't need to use it at all then sure just go ahead and > connect it to GND. OK, now it's perfectly clear, thank you! Best regards Piotr WyderskiArticle: 84769
First, keep in mind that while 25 MHz MII for Fast Ethernet is still common, there is slowly more and more of a move to 50 MHz with 2 bits, known as RMII; 50 MHz and RMII give a simple implementation, the clock is still relatively slow, and the benefit of half the pins in a multi-port design is tremendous. To some extent it might even be easier for you, as you have a single clock source and no longer separate RX and TX clocks, each with its own ppm offset. Manchester is, generally speaking, used only on fiber and not over copper. As for the Manchester frequency: since each bit is represented by two half-bits (0 becomes 01, 1 becomes 10), the Manchester rate is double the data rate. As for Fast Ethernet over copper, the frequency over the wire, as far as I know, will be 100 Mbps from MAC to PHY, say using 25 MHz x 4; then in the PHY, after the 4B/5B encoding, 100 x 5/4 = 125; then after MLT-3, 125 x 2/3 = ~83 MHz. While 100 x 10/8 also gives 125, keep in mind that 8b/10b encoding is used in Gigabit Ethernet and not in Fast Ethernet, which uses 4B/5B. The main difference is that both give extra range for special characters; however, 8b/10b also takes into account how many zeros and ones are sent and balances them, while 4B/5B doesn't. Nevertheless, as mentioned before, you don't need to worry about the encoding and frequency over the line, as this is the PHY's responsibility and not the MAC's. Talking about Manchester, there are two ways to sync. One uses a pre-known transmitted stream such as the preamble (the sync problem is that if you see an endless stream like ...10101010... you don't know whether the source was all 0s or all 1s). However, in my opinion a better way to sync, if you have the option to do it, is to look for a change. The interesting thing in Manchester is that when you get something like ...001 you have no clue about the past, but you know for sure that the last two digits are due to a 0 in the source and you are now in sync. Similarly, ...110 tells you the last two digits are due to a 1 in the source and you are now in sync. Have fun.Article: 84770
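For what it's worth, here is a hypothetical Verilog sketch of that "look for a change" idea (not from the thread; the names and the per-half-bit clocking are made up, and a real design would still recover the half-bit timing by oversampling, as the original poster described). Three consecutive half-bits "001" can only end on a complete '0' symbol (01), and "110" can only end on a complete '1' symbol (10), so seeing either pattern fixes the symbol phase:

// Hypothetical sketch: acquire Manchester symbol alignment from the
// received half-bit stream itself, without relying on the preamble.
module manchester_align (
    input  wire clk_halfbit,  // one tick per received half-bit
    input  wire rst,
    input  wire rx,           // recovered half-bit value
    output reg  aligned       // high once the symbol phase is known
);
    reg [1:0] history;  // the two previous half-bits
    reg [1:0] fill;     // counts valid entries in 'history' (saturates at 2)

    wire [2:0] window = {history, rx};

    always @(posedge clk_halfbit) begin
        if (rst) begin
            history <= 2'b00;
            fill    <= 2'd0;
            aligned <= 1'b0;
        end else begin
            history <= window[1:0];
            if (fill != 2'd2) fill <= fill + 2'd1;
            if (fill == 2'd2 && (window == 3'b001 || window == 3'b110))
                aligned <= 1'b1;   // a symbol boundary follows this half-bit
        end
    end
endmodule

Once 'aligned' is set, a decoder can toggle a phase flag every half-bit and take one data bit per pair of half-bits, with each pair starting on the boundary found above.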
Hi, Here is my attempt to answer your question: http://www.engr.sjsu.edu/~crabill/vlogfsm.pdf EricArticle: 84771
Group, I am going to buy some EPM1270T144C5 from digikey.com but they listed both EPM1270T144C5 and EPM1270T144C5N. What's the difference? I searched google.com and altera.com but found nothing. Thanks vax, 9000Article: 84772
Dear, An error occurred when I configured a V2Pro-30 in Project Navigator 6.3: "Specified PROM file untitled.mcs is too large. Please target a larger PROM" What I did was: * Generate PROM file: iMPACT -> PROM file -> Xilinx Serial PROM -> Auto Select PROM -> Finish. Until then, everything seemed OK. * Configure Device (iMPACT), 'Boundary Scan Mode' -> Automatically connect to cable and identify -> 2 devices (xccace and v2pro30) found -> configuration file (untitled.mcs) opened -> and then finally the error message above appeared. The PROM is around 2MB but my configuration file is around 4MB. How can I fix this? Thank you for any remarks.Article: 84773
From a search on Altera.com for RoHS, the "lead free" page (2nd hit) came up with: A partial list of Altera's available lead-free devices is shown in Table 1. Lead-free components are denoted by an "N" suffix at the end of the part number, e.g., "EP1C12F324C6N". It's a sincere disappointment that the part numbering guide on their site doesn't include the "N" in the "Suffixes" section. "vax, 9000" <vax9000@gmail.com> wrote in message news:d752cn$286$1@charm.magnus.acs.ohio-state.edu... > Group, > I am going to buy some EPM1270T144C5 from digikey.com but they listed both > EPM1270T144C5 and EPM1270T144C5N. What's the difference? I searched > google.com and altera.com but found nothing. > > Thanks > > vax, 9000Article: 84774
I can answer some of your questions. The user guide (available from xilinx.com) should have more info. A height of 16 CLBs (or 4 BRAMs, 4 DSP48s, or 32 single IOs) constitutes a section of 41 * (32-bit) words = 1312 bits = 1 frame. The frames thus do not run the entire column as in V2. Each block (CLB*16, BRAM*4, etc.) does of course need a slightly different number of frames to program it. It takes ~22-23 frames to program 16 CLBs or 4 BRAMs. http://www.xilinx.com/products/virtex4/literature.htm - Vic Love Singhal wrote: > Hi, > > We have the following questions related to Virtex 4 configuration > frames: > > a). What is the shape and size of a frame in Virtex 4 device? In Virtex > 2, each frame is contained in one vertical column of the device. But > we could not find any information related to the shape of frame in > Virtex 4. > > b). How many CLBs does one frame include or vice versa? > > c). In one datasheet, it is written that Virtex 4 is a tile based > device. Does it mean that each configuration frame is completely > contained in one tile? > If yes, how many frames/CLBs does one tile contain? > > d). Can we use the Xilinx JBits SDK to configure a Virtex 4 frame? The > documentation for JBits 3.0 does not refer to Virtex 4 and only seems > to support Virtex 2. If it doesn't support Virtex 4, what is the > equivalent toolkit for Virtex 4 devices? > > Please do help as we could not find information related to this in > Xilinx website or in newsgroups. > > Thanks in advance, > Love Singhal >