The coding weeks are over!

So people, the coding weeks are over. This post is a reference for the work I did during this period, highlighting the goals achieved and the outstanding work.

The task was to develop a Gigabit Ethernet Media Access Controller (GEMAC), i.e., the MAC sublayer, in accordance with the IEEE 802.3-2005 standard using MyHDL. The aim was to test and help the development of MyHDL 1.0dev, while also demonstrating its use to other developers.

In brief, the work done includes developing the Management block and the core blocks, i.e., the Transmit and Receive Engines with the Address Filter and Flow Control. The work left includes developing the interfacing blocks, i.e., the FIFOs (Rx and Tx) and the GMII.

Post mid-term, I started implementing the core blocks. Midway, I realised that I would be better off using finite state machines (FSMs) to implement these, which led me to rewrite both blocks. Currently, I am looking at implementing the interfacing blocks: the FIFOs (for which I shall try to reuse blocks already developed by other developers) and the GMII (which depends on the PHY; I will be using the one that comes on the ZedBoard, meaning I will be developing RGMII).

Tests for each block were developed using pytest. Separate tests were written for each unique feature to ensure it works. Convertibility tests were also developed to check the validity of the converted Verilog code, which will be used for hardware testing at the end.

Main Repo : https://github.com/ravijain056/GEMAC/

Links to PRs:
1. Implemented Modular Base: https://github.com/ravijain056/GEMAC/pull/1
2. Implemented Management Module: https://github.com/ravijain056/GEMAC/pull/4
3. Implemented Transmit Engine: https://github.com/ravijain056/GEMAC/pull/5
4. Implemented Receive Engine: https://github.com/ravijain056/GEMAC/pull/6

My main focus after I am done with this is to make the code approachable for other developers by providing good examples of using the library.

Finite State Machines

Well, it's been a tough couple of weeks. Proceedings at my university have caused me to slow down a bit, but I have been making progress. I was working on the RxEngine block when things got too complex, and I decided to take a step back and use finite state machines (FSMs) to develop simpler, more readable code. As it turned out, I ended up rewriting the TxEngine block from scratch as well. In the midst of all this, the file tree in my local repo got cluttered, as I had multiple versions of the RxEngine implementation, so I planned to wait for the final revision of the blocks to avoid problems with commits later on while rebasing. I shall push the latest code in a day or two for review.

While implementing the TxEngine block using FSMs, I added the underrun functionality that was missing from the previous implementation. I also did a rough implementation of the Flow Control block, which accepts requests from the client to send pause control frames and triggers the TxEngine accordingly.
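The restructuring idea can be sketched in plain Python rather than MyHDL; the state names and the underrun transition below are illustrative, not the real block's:

```python
# Hypothetical sketch of an FSM-style transmit engine: one enum of
# states and one pure transition function, which keeps the control
# logic readable. An underrun from the client aborts the frame.
from enum import Enum

class TxState(Enum):
    IDLE = 0
    PREAMBLE = 1
    DATA = 2
    FCS = 3

def next_state(state, start, data_valid, underrun):
    if state is TxState.IDLE:
        return TxState.PREAMBLE if start else TxState.IDLE
    if underrun:                 # client ran dry: abort/corrupt the frame
        return TxState.IDLE
    if state is TxState.PREAMBLE:
        return TxState.DATA
    if state is TxState.DATA:
        return TxState.DATA if data_valid else TxState.FCS
    return TxState.IDLE          # FCS sent, back to idle

s = TxState.IDLE
for inputs in [(1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 0, 0)]:
    s = next_state(s, *inputs)
print(s)  # TxState.FCS
```

In the MyHDL version the same structure becomes a state signal updated in a single clocked process, which is what made the rewrite simpler than the original ad hoc control logic.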

I also had a discussion with Josy, one of the mentors, about how to provide clocks to sub-blocks and how to handle resets. He suggested providing clocks to sub-blocks directly in the top block, as opposed to relaying them through the sub-blocks. A good reason I can think of to support this is that, in a big and complex system, relaying might cause problems in simulation. I shall discuss it in more detail in upcoming posts.

Started Receive Engine!

It's been a long time since my last post (two weeks, phew)! Sorry for the slump. Anyway, during this period I successfully merged the Transmit Engine after my mentor's review. I later realised that I had missed adding the client underrun functionality, which is used to corrupt the current frame transmission. I shall make sure to add that in the next merge.

Next, I started looking at the GMII, which partly stalled my work because I was unable to clearly understand what I had to do for it. So I decided to move on and complete the Receive Engine with the Address Filter first. So far, I have finished receiving the destination address from the data stream and filtering using the address table, matching entries against the frame's destination address. If there is a match, the receiver starts forwarding the stream to the client side; otherwise, it just ignores the frame.
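The filtering decision can be sketched in plain Python (the broadcast rule and the table entries here are illustrative assumptions, not the real block's contents):

```python
# Sketch of the address-filter decision: match the received destination
# address against the address table; forward on a match, ignore otherwise.
BROADCAST = bytes([0xFF] * 6)   # assumed: broadcast frames always pass

def accept(dst, addr_table):
    """Return True if the frame should be forwarded to the client."""
    return dst == BROADCAST or dst in addr_table

table = {bytes([0x02, 0x00, 0x00, 0x00, 0x00, 0x01])}
print(accept(bytes([0x02, 0x00, 0x00, 0x00, 0x00, 0x01]), table))  # True: forwarded
print(accept(bytes([0x02, 0x00, 0x00, 0x00, 0x00, 0x02]), table))  # False: ignored
```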

Next, I look forward to adding error-check functionality to be able to assert Good/Bad Frame at the end of the transmission.

CRC32 : Transmit Engine

I completed the first draft of the Transmit Engine implementation. The implementation was fairly straightforward, barring the calculation of the CRC-32 (Cyclic Redundancy Check) for the Frame Check Sequence.

It stalled me for a day or two, requiring patience while reading up on and understanding the various kinds of implementations. A very painless tutorial for understanding CRC-32 and implementing it from the ground up can be found here. This implementation in C also helped.
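For illustration, here is a from-scratch, bit-serial CRC-32 in plain Python (the reflected 0xEDB88320 form used by Ethernet), checked against `zlib.crc32` from the standard library; the hardware version processes the stream bit by bit in much the same way:

```python
import zlib

def crc32_bitwise(data: bytes) -> int:
    """Bit-at-a-time CRC-32 (reflected polynomial 0xEDB88320)."""
    crc = 0xFFFFFFFF                          # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):                    # one bit per iteration
            if crc & 1:
                crc = (crc >> 1) ^ 0xEDB88320
            else:
                crc >>= 1
    return crc ^ 0xFFFFFFFF                   # final XOR

print(hex(crc32_bitwise(b"123456789")))            # 0xcbf43926, the standard check value
print(crc32_bitwise(b"hello") == zlib.crc32(b"hello"))  # True
```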

I have now created a pull request for code review.

Started Transmit Engine!

Yay readers, good news: I got through the mid-terms and received the payment. Feels good!

About the project: I started off with the implementation of the Transmit Engine. Sweeping changes had to be made to the interfaces of the sub-blocks. Notable changes:

  • Removal of the Client sub-block, interfacing the FIFOs directly with the engine.
  • Addition of “intrafaces”, i.e., interfaces between the sub-blocks.
  • Moving the configregs (Configuration Registers) and addrtable (Address Table) out of the management block into the main gemac block, making them visible to the other sub-blocks. They are now accessed by the management block through ports.

As a result of changing the ports of the management block, I had to edit test_management to reflect the change. I had an independent instantiation of the management block in every test, which was redundant. I then looked into pytest fixtures, which let me have a common function run before every test, removing the redundancy. This also makes it convenient to change the block's port definitions in the future if needed.
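A pared-down, hypothetical version of that pattern (the fixture builds a stand-in for the block here; the real fixture instantiates the management block with its ports):

```python
# Sketch of using a pytest fixture to share one setup across tests
# instead of instantiating the block in every test function.
import pytest

@pytest.fixture
def management():
    # stand-in for instantiating the management block with its ports
    return {"configregs": [0] * 8, "addrtable": []}

def test_reset_values(management):
    assert all(reg == 0 for reg in management["configregs"])

def test_addrtable_write(management):
    management["addrtable"].append("02:00:00:00:00:01")
    assert len(management["addrtable"]) == 1
```

If the port definitions change later, only the fixture body needs editing, not every test.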

Now I am working on implementing its features. A little about the Transmit Engine:

“Accepts Ethernet frame data from the Client Transmitter interface,
adds a preamble to the start of the frame, adds padding bytes and the frame
check sequence. It ensures that the inter-frame spacing between successive
frames is at least the minimum specified. The frame is then converted
into a format that is compatible with the GMII and sent to the GMII Block.”
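The steps in that description can be sketched in plain Python (the addresses and payload are made up; `zlib.crc32` happens to match the Ethernet FCS polynomial):

```python
# Sketch of transmit-engine frame assembly: prepend the preamble,
# pad short frames, and append the frame check sequence (FCS).
import struct
import zlib

PREAMBLE = b"\x55" * 7 + b"\xd5"   # 7 preamble bytes + start-frame delimiter
MIN_FRAME = 60                     # minimum frame size excluding the 4-byte FCS

def assemble(dst, src, ethertype, payload):
    frame = dst + src + struct.pack(">H", ethertype) + payload
    frame += b"\x00" * max(0, MIN_FRAME - len(frame))   # pad short frames
    fcs = struct.pack("<I", zlib.crc32(frame))          # FCS, LSB first
    return PREAMBLE + frame + fcs

wire = assemble(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hi")
print(len(wire))   # 8 + 60 + 4 = 72 bytes on the wire
```

The real block additionally enforces the minimum inter-frame gap between successive frames, which a software sketch cannot usefully show.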

GSoC: Mid-Term Summary

Well, four weeks of GSoC are over, and it's time for the mid-term summary and replanning.

Mid-Term Summary:

  • Studied MACs and how they work. Chose Xilinx User Guide 144 (1-Gigabit Ethernet MAC) as the interface guide and its reference Verilog design as the features guide.
  • Completed setup of main repo providing the modular base for further development.
  • Implemented Management Sub-block.
  • Set up the repo with Travis CI builds, Landscape linting, and Coveralls.

So, comparing with the timeline in the proposal, I have achieved the targets of the first four weeks, swapping the management and Transmit Engine modules.

Further Plans:

  • Take up the three other sub-blocks mentioned in the proposal timeline and try to implement them in one week each (rather than two weeks each, as proposed).
  • Implement the wrapper blocks, including the FIFOs and the client, in the following week.
  • In the remaining weeks: hardware testing, refactoring the code if necessary, and setting up Read the Docs.

Maintain A Clean History!

I have finally completed and merged the management module. Since my last post, the things I needed to do before merging were adding doc-strings, setting up Coveralls, and resolving conflicts with the master branch (rebasing).

Adding doc-strings was the easiest, but it still took time, as it gets a little boring (duh!). I used this example provided by my mentor as a reference.

Next came the Coveralls setup, which, I must say, is a little more complex than the others. I got a lot of help from referencing an already set up repo, test_jpeg, on which a fellow GSoCer is currently working. It got a little tricky in between, as I stumbled upon an end-of-line character (CRLF vs. LF) problem. Before this, I didn't even know that the “type of enter” could cause problems when running scripts. It consumed one whole day; the problem crept in while I was editing the file in Notepad on Windows. This post later helped me get over it. More on the Coveralls setup in my next post!

Next came rebasing and resolving my dev branch's conflicts with the master branch. When I started, my master branch was a few commits ahead (setting up the badges) and thus had conflicts. Rebasing was also required because my mentors suggested maintaining a clean history in the main branch. It took a lot of experimenting to finally understand how to rebase my branches. The structure of my repo:

  • origin/master
  • origin/dev
  • dev

So I have a local dev branch in which I develop my code and constantly push to the remote origin/dev branch for code reviews by my mentor. This leads to lots of commits containing small changes and fixes for silly issues. But when I make a pull request and merge onto the origin/master branch, I want a cleaner commit history.

An interactive rebase helps to modify that history using pick (keep the commit), squash (merge onto the previous commit while editing the commit description), and fixup (merge onto the previous commit, keeping its description intact). Understanding this required a lot of experimenting with my branch, which is dangerous, so I made a copy of my dev branch first, which I suggest you do right now before continuing.

To rebase your local branch onto the origin/master branch, use “git rebase -i <base branch>”. Warning: avoid moving commits up or down if they touch the same file, as this may cause conflicts. Once the rebase starts, resolving conflicts is a lot of pain, because doing it improperly triggers further conflicts.

After rebasing comes the trickier part. Your local branch has brand-new rebased commits, and your remote has the old ones. You need to use “git push --force”. It will overwrite the commits on the remote branch, after which you can create a pull request onto origin/master. Don't do this if there are other branches based on this branch; in that case, merge directly onto master, the downside being that you won't get to make a pull request, which is essential for code discussions.
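The squash workflow can be demonstrated on a throwaway repo (the repo, file, and commit names below are made up; `GIT_SEQUENCE_EDITOR` stands in for the interactive editor so the rebase runs unattended):

```shell
# Build a disposable repo with a base commit and three "wip" commits.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev
echo base > f && git add f && git commit -qm "base"
git checkout -qb dev
for i in 1 2 3; do echo "$i" >> f && git commit -qam "wip $i"; done
git branch dev-backup      # safety copy before rewriting history
# Keep the first wip commit, fold the other two into it (fixup):
GIT_SEQUENCE_EDITOR='sed -i "2,\$s/^pick/fixup/"' git rebase -i HEAD~3
git log --oneline          # history is now just "wip 1" and "base"
```

After this, the real workflow would finish with `git push --force origin dev`, which is omitted here since the demo repo has no remote.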

After all this, my code was ready to merge, and I got the go-ahead (after a day of no internet, hate that) from my mentor. So I had finally completed the second merge onto my main branch, implementing the management block and setting up Coveralls.


GSoC Update: Stuck with conversion

In the past week, I implemented a new feature for the management block – address table read/write, which shall be used for address filtering – and updated the test suite accordingly without much trouble.

Then I started looking into cosimulating the management sub-block. It took me a while to understand the concept. After talking with my mentors, I chose to reserve cosimulation for verifying the converted code of the top-level constructs, and to use simple convertible testbenches to verify the generated Verilog/VHDL code for the sub-blocks.

While pursuing that, I faced a lot of issues and uncovered some problems with the conversion, about which I have posted in detail on Discourse (Verilog, VHDL).

So next, I should develop tests for the MyHDL core covering some of the issues mentioned on Discourse and make a pull request. After that, I should finish this module within the week, completing it with good documentation as well.

GSoC: Management Block – Half Work done!

It's been seven days since my last blog update. Time flies! One thing I wish I hadn't done these past days was procrastinate on blogging (my org requires me to blog 3-4 times a week). Other than that, this week has been the best one yet. It certainly gets much easier once you start off.

In my last blog post, I said I was still figuring out the proper way to use FIFOs with the GEMAC core and its interfaces. While going through the Xilinx User Guide 144 document, I found that they had documented the interfaces of the FIFO too, so that problem was solved easily. I also mentioned that I would be developing some tests for the GEMAC core as a whole. Well, I couldn't do that, because I had never written tests before and didn't really have the experience to identify the transactors (structure) of the package before actually implementing it.

So I decided to move on, pick a block, and implement it. (“Block” is the term used in MyHDL, as “module” clashes with a Python term that has a different meaning; I will use “block” for the rest of this blog series to refer to a hardware module.) I chose the management block to start with, as opposed to the Transmit Engine mentioned in my project proposal, because I thought it would be better to first know the configuration registers on which the other blocks base their behaviour.

Again, I couldn't start with the tests (despite my org suggesting a test-driven approach), for the same reason as above. So instead I started off by implementing the features of the block directly. The first feature I added was reading/writing the configuration registers (basically a list of signals) present in the block. Then I quickly moved on to another feature: converting host transactions into MDIO (MII management) transactions, which is, roughly speaking, converting parallel data into serial data (write operation) and vice versa (read operation).
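The parallel-to-serial idea can be sketched in plain Python (widths and bit order here are illustrative; the real block shifts one bit per MDC cycle and wraps the data in the full MDIO frame):

```python
# Sketch of the two MDIO directions: shift a 16-bit register value out
# MSB-first (write), and shift arriving bits back in (read).
def serialize(word, width=16):
    """Parallel to serial: emit bits MSB-first."""
    return [(word >> i) & 1 for i in range(width - 1, -1, -1)]

def deserialize(bits):
    """Serial to parallel: shift bits in, MSB arrives first."""
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

bits = serialize(0xA5C3)
print(bits[:4])                 # [1, 0, 1, 0] -- the 0xA nibble
print(hex(deserialize(bits)))   # 0xa5c3
```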

Once I had completed the first version, it was pretty clumsy and nowhere close to the one the people at the org desired. In my defence, it was my first time developing a real block, beyond writing some silly examples here and there. At this point, I made a pull request to the main repository, knowing that the block wasn't complete yet, desperately in need of reviews from my mentors. Beware: up to this point, I had done no simulation or testing to see whether my code was correct.

Well, the reviews came in, and I started adding tests, one by one, that simulated the features using the new API presented in MEP-114. One small difference from the API described there concerns ‘inst.config_sim(trace=True)’: note that there is no ‘backend’ parameter for config_sim, unlike what the MEP (MyHDL Enhancement Proposal) states. I used the Impulse plugin in Eclipse to view the traced signals produced by simulation.

I added tests one by one and followed something like a test-first approach after that, i.e., adding a test and tweaking the code until the test passes. The tests added for MyHDL simulation so far cover read/write of a configuration register, MDIO clock generation (derived from the host clock), and the MDIO write operation. This led to what I would call the second version of the implementation, still far from the current version: quite complicated, but passing the tests. I learned the use of TristateSignal and decided to leave it to the top-level block to implement.

Next came the test which checked the convertibility of the code to other HDLs, which taught me quite a few things and helped me relate the code I had written to the actual hardware implementation. It led me to optimise my code and trim unnecessary parts, producing the current, convertible version. Things I learnt while making the code convertible:

  • You cannot use yield statements in the processes.
  • You can have a class with Signals as attributes for local Signals.
  • You cannot use class functions in the procedural code.
  • In local functions, all the Signals used should be passed as arguments, regardless of scope. The classes mentioned above cannot be passed as arguments.
  • Signals can be driven from only one process, i.e., you cannot perform ‘sig.next = <some value>’ in two different processes.

Now I shall refine and add some more tests over the next two days, and after that work on cosimulation of the block.

Get your own badge! (Part-1)

Recently, week 2 of GSoC 2016 concluded, and my mentor asked me to set up the main repo with integration tools – Travis CI, landscape.io (linting), Coveralls (code test coverage), and Read the Docs – and add badges for each of them.

My first response was: what are badges? They are markups provided by the integration tools showing the corresponding score or stats, meant to be added to the Readme.md file of your repository. The rest of this post is about how to get one for your own repo.


Travis-CI:

Travis CI is an integration tool which tests and deploys your code every time you push a commit, for all your branches and pull requests. The tests are run according to the script you provide in the .travis.yml file in your branch.
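For reference, a minimal .travis.yml for a Python project might look like the following (the Python version and commands are placeholders for your own project's):

```yaml
language: python
python:
  - "3.5"
install:
  - pip install -r requirements.txt
script:
  - py.test
```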

Setting it up is quite easy. Head to travis-ci.org, sign up using your GitHub account, and follow the instructions to link your repo. Remember to switch on your main repo in your Travis account. You can toggle the switches for your different repos anytime from your profile at travis-ci.org. Straightforward guides for writing the .travis.yml script file in almost all languages are provided by travis-ci.org here.

After the Travis CI integration is done, sign in to your Travis CI account and select your main repo from the left panel. You can see the badge next to your repo's name, like this.


Click on the badge, select your main branch and ‘Markdown’, and copy the link provided into your Readme.md file to get the badge on your repo.


Landscape (Linting):

After every code push, Landscape runs checks against your code to look for errors, code smells, and deviations from stylistic conventions. It finds potential problems before they're problems, helping you decide what and when to refactor.

To help you set up, everything is documented here. Add the repository you want to perform checks on.

After setting up, go to your dashboard by logging into your landscape.io account and click on your repository. The badge will show up in the top right corner, as below.


Click on it and copy the Markdown link into the Readme.md file of your repo to get the badge on GitHub.


Part 2, covering the setup of Coveralls and Read the Docs, will be posted soon!