Blog 4: Polynomial Anonymous Dynamic Distributed Computing without a Unique Leader

Since the last blog post, we have gathered considerably more data. My professor and I have submitted the full version of the theory to journals with all the details now fixed. Also, 10 of the 13 pending executions of the deterministic algorithm have now finished!

As a result, we now have a lot more data. To make the research and the paper more “story-able,” we need to make some small modifications and rerun the simulations. The next step will be for me to plot the data using gnuplot and look for a relationship, correlation, or causation that can be turned into a story. The hope behind the small modifications to the phases is to find the sweet spot that reduces the running time by cutting down the number of rounds and phases of the algorithm. Once these modifications are made, we can run everything remotely on the university’s HPC machine.

Personally, the undergraduate research experience has taught me a lot about working and communicating regularly with my professor, about the ins and outs of working with an HPC machine, and about new APIs such as Aparapi for Java; I am now also learning gnuplot so I can plot the data in the way that works best for the paper. The goal of the project was to give me hands-on experience and to make me learn by doing: figuring out and theorizing about relationships and ideas in the research.

I have learned numerous languages and theoretical concepts from the experience, and I plan to learn more by completing this research with the professor after the program and then, hopefully, publishing the work in a journal. The school year has led me to accomplish a lot, and I am looking forward to accomplishing even more in the time to come.

Blog 3: Polynomial Anonymous Dynamic Distributed Computing without a Unique Leader

In the research, we have faced challenges such as the time it takes to run the simulations and gather results. We do have data from running the deterministic algorithm on networks of around 30-40 nodes, which is enough to publish something in the paper, but in parallel we are trying to track down a bug in the code for the randomized algorithm in our simulations.

Because the simulations have been taking so long, the professor and I have developed, and are currently running, new strategies for gathering data, and we have even made theoretical improvements to the paper.

Our approach is to reduce the number of phases in the simulations to just one. Doing so removes a nested loop from the code and cuts the running time from roughly O(n^4) down to O(n^3), which means the simulations should run faster by roughly a factor of ten. There are, however, some challenges that come with this; a small sketch of the change follows below.
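To make the change concrete, here is a purely illustrative Java sketch, with made-up method and variable names rather than our actual simulator code, of how removing the outer phase loop drops one factor of n from the total work:

```java
// Purely illustrative sketch -- NOT our actual simulator code; all names here
// are made up. It only shows how dropping the outer phase loop removes one
// factor of n from the total work.
public class PhaseSketch {

    // Original shape: ~n phases, each of ~n rounds, each round doing O(n^2)
    // work for the matrix-vector step, i.e. roughly O(n^4) overall.
    static double[] runWithPhases(double[][] adjacency, double[] potential) {
        int n = potential.length;
        for (int phase = 0; phase < n; phase++) {        // outer phase loop
            for (int round = 0; round < n; round++) {    // rounds within a phase
                potential = multiply(adjacency, potential);
            }
        }
        return potential;
    }

    // Single-phase shape: the same inner work without the phase loop,
    // i.e. roughly O(n^3) overall.
    static double[] runSinglePhase(double[][] adjacency, double[] potential) {
        int n = potential.length;
        for (int round = 0; round < n; round++) {
            potential = multiply(adjacency, potential);
        }
        return potential;
    }

    // Plain O(n^2) matrix-vector product.
    static double[] multiply(double[][] m, double[] v) {
        double[] result = new double[v.length];
        for (int i = 0; i < v.length; i++) {
            for (int j = 0; j < v.length; j++) {
                result[i] += m[i][j] * v[j];
            }
        }
        return result;
    }
}
```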

One of the challenges is correctness. With fewer phases, much less of each node's weight pools toward a single source, so on certain graphs, for example two clusters separated by a long chain of singly linked nodes, the algorithm could unfortunately terminate with an inaccurate result. To compensate for the fact that some executions will terminate with the wrong size for the graph, we are measuring how frequently this happens.

To figure that out, we are running many redundant executions of the same simulation on the same input and comparing the results. The statistics of the data we gather will tell us whether the single-phase version is of any practical use.
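As a rough illustration of that redundancy check (the simulator call here is a stand-in, not our real code), the harness below runs the same input many times and reports how often the execution terminates with the correct count:

```java
// Hypothetical sketch of the redundancy check described above: run the same
// single-phase simulation repeatedly on one input graph and record how often
// it terminates with the correct count. runSimulation() stands in for our
// actual simulator and is assumed, not shown.
import java.util.stream.IntStream;

public class RedundancyCheck {
    public static void main(String[] args) {
        int trueSize = 40;      // known size of the test graph
        int repetitions = 100;  // independent executions on the same input

        long correct = IntStream.range(0, repetitions)
                .map(run -> runSimulation(trueSize))   // count each execution terminates with
                .filter(count -> count == trueSize)
                .count();

        System.out.printf("Correct terminations: %d / %d (%.1f%%)%n",
                correct, repetitions, 100.0 * correct / repetitions);
    }

    // Placeholder for the single-phase simulator; here it simply returns the
    // true size so the sketch compiles and runs on its own.
    static int runSimulation(int trueSize) {
        return trueSize;
    }
}
```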

The knowledge I have gained so far has come from problem-solving: coming up with theoretical ideas for running the simulations, brainstorming with the professor on how to speed them up, and figuring out statistically whether running identical simulations repeatedly has any practical value. Along with that knowledge, the hands-on experience of coding and running the simulations on an HPC server has shown me what real-world research in a university setting involves.

Our next hurdles will be gathering this data and then, together with the data from the deterministic algorithm, figuring out how to make the work “story-able”: in essence, how to make the paper compelling and noteworthy enough to be accepted at an upcoming venue such as an IEEE conference or another of that caliber. We also need to understand what the data is telling us and what real-world potential and discoveries the work could offer to others in this area of research.

The potential is great, and so is the work itself, which my professor and I look forward to exploring much more deeply in the time to come.

Blog 2: A continuation in the study of Anonymous Dynamic Distributed Computing without a Unique Leader

Since the last blog, my professor and I have been making progress in strategizing ways to run our simulations faster so that we can collect as much data as possible.

The methods we have discussed mainly involve parallelizing the section of code that performs matrix-vector multiplication, in which an adjacency matrix is multiplied by a vector to produce each node's result for the rounds of each phase.
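As a minimal sketch of that idea, assuming a plain double[][] adjacency matrix and double[] vector rather than our actual data structures, each row of the product is an independent dot product and can be computed on its own core; the example below uses Java parallel streams just to illustrate:

```java
// Minimal sketch of parallelizing the adjacency-matrix-by-vector step: each
// row is an independent dot product, so rows can be computed in parallel.
// This uses Java parallel streams, not the GPU/Aparapi path, purely for
// illustration.
import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelMatVec {

    static double[] multiply(double[][] adjacency, double[] vector) {
        int n = vector.length;
        double[] result = new double[n];
        // Row i only writes result[i], so the parallel loop needs no locking.
        IntStream.range(0, n).parallel().forEach(i -> {
            double sum = 0.0;
            for (int j = 0; j < n; j++) {
                sum += adjacency[i][j] * vector[j];
            }
            result[i] = sum;
        });
        return result;
    }

    public static void main(String[] args) {
        double[][] a = {{0, 1, 0}, {1, 0, 1}, {0, 1, 0}}; // path graph on 3 nodes
        double[] v = {1.0, 1.0, 1.0};
        System.out.println(Arrays.toString(multiply(a, v))); // [1.0, 2.0, 1.0]
    }
}
```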

The work so far has been intensive, requiring a deep understanding not only of Java's multithreading libraries but also of how parallelism works inside a computer: how different working with a parallel machine is from working with a sequential one, and how that difference can significantly hinder the progress of the simulations. Because of that, I have been studying fundamental operating systems concepts such as context switching, proper thread synchronization, managing multiple threads within a single process, fork/join pools, and so on.

This has been a good personal lesson that theory and practice are two very different things. Theoretically, with proper multithreading, we figured out that our work could be sped up from O(n^3) computational time to O(lg(n)) computational time. To put that into perspective, something that takes 1000 days at O(n^3) could take only about one day at O(lg(n)): a difference of one day versus nearly three years! In practice, however, creating more threads also creates more overhead for each thread, and if the work is split recursively enough, especially beyond the number of available cores, it can actually take more time. Until recently we had only 12 cores available, no more than an average laptop!
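The fork/join pattern we studied can be illustrated with a small recursive sum; this is a generic textbook-style example, not our simulation code, but it shows both the logarithmic recursion depth and the per-fork overhead that bites once the threads outnumber the cores:

```java
// Small fork/join sketch of the pattern we studied: summing an array by
// recursively splitting it in half. With unlimited cores the recursion depth
// is O(lg n), but every fork adds real overhead, which is why oversubscribing
// a 12-core machine can make things slower in practice.
import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, sum sequentially
    private final long[] data;
    private final int from, to;

    ParallelSum(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        ParallelSum left = new ParallelSum(data, from, mid);
        ParallelSum right = new ParallelSum(data, mid, to);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // do the right half, then combine
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        Arrays.fill(data, 1L);
        long total = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println(total); // 1000000
    }
}
```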

For this reason we have been working hard to speed things up as much as possible, and we have been working with .slurm files so that we can tell the machine not only which programs to run but also the directives for how long the simulations should run and how many tasks and CPU cores each task should use. Running batch jobs on a very powerful school computer dedicated primarily to research has been a great learning experience for me.

I have also learned to meet regularly with the professor as I do the research, and I have come to understand first-hand the dynamics of student-faculty research: communicating clearly and promptly with my professor about issues or concerns with running the simulations, the amount of time that needs to be budgeted, and all the meta-engineering tasks and problems involved in writing a theoretical paper.

For now we are both highly motivated to run the simulations, gather the data, and study it as much as possible, especially now that we have access to a GPU machine with 3,840 cores available for our work. We look forward to the work ahead and the research to come.

 

Blog 1: Polynomial Anonymous Dynamic Distributed Computing without a Unique Leader

The research between Professor Miguel Mosteiro and me involves computing the number of nodes in an Anonymous Dynamic Network (ADN) without having a single unique leader. An Anonymous Dynamic Network is a network of nodes that have no identifiers and whose communication links may change arbitrarily and continuously over time.

Node anonymity in ADNs is motivated by the expected applications of such communication infrastructure. For instance, in ad-hoc networks embedded in the Internet of Things, nodes may have to be deployed at a massive scale, and having unique identifiers may be impractical or inconvenient. Moreover, expectations of low node cost may introduce uncertainty about the number of nodes that will effectively start up. Hence the need for Counting.

The nodes in the network fall into two categories. The network has a total of “n” nodes, of which “l” are black nodes, the nodes that are known and accounted for. The remaining “n - l” nodes are white nodes. The network is run using two algorithms, “Methodical Multi-Counting” (M.M.C) and “Leaderless Methodical Multi-Counting” (L.M.M.C). M.M.C uses the Methodical Counting algorithm from a previous paper by my professor, but runs it on multiple nodes in the network simultaneously, while L.M.M.C uses M.M.C as a subroutine and at the same time introduces an element of randomness as a Monte Carlo protocol. These two methods are to be run in several hundred computer simulations to show whether there is a change in the running time of Methodical Counting when it is multi-leader, and whether a deterministic versus a randomized algorithm shows any significant difference or improvement in running time.

So far our method has been to code the many parallel threads for the L.M.M.C algorithm, in which each black node counts the neighbors it is attached to that are not yet accounted for, i.e., the white nodes in the network. Once all threads are done computing, the per-node counts are summed up. The code for M.M.C was typed up fairly quickly, and what remains is to run the simulations. A rough sketch of this thread structure follows below.
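To be clear, the following is only a hedged sketch of the thread structure just described, not the actual M.M.C/L.M.M.C protocol; the data structures and method names are assumptions made for illustration:

```java
// NOT the M.M.C/L.M.M.C protocol itself -- only a sketch of the thread
// structure described above: each black node counts, in parallel, the white
// (uncounted) neighbors it is attached to, and the per-node counts are summed
// once every thread has finished.
import java.util.*;
import java.util.concurrent.*;

public class BlackNodeCounting {

    // adjacency[i][j] == true when nodes i and j share a link in this round
    static long countWhiteNeighbors(boolean[][] adjacency, List<Integer> blackNodes,
                                    boolean[] isBlack) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Long>> perNodeCounts = new ArrayList<>();

        // One task per black node: scan its row of the adjacency matrix.
        for (int black : blackNodes) {
            perNodeCounts.add(pool.submit(() -> {
                long count = 0;
                for (int j = 0; j < adjacency[black].length; j++) {
                    if (adjacency[black][j] && !isBlack[j]) count++; // white neighbor
                }
                return count;
            }));
        }

        // Sum the per-node counts after all threads are done computing.
        long total = 0;
        for (Future<Long> f : perNodeCounts) total += f.get();
        pool.shutdown();
        return total;
    }
}
```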

The simulations involve various types of graphs, such as stars, trees, and paths, with varying values of n and varying numbers of black and white nodes. Running the two algorithms on these different graph types is meant to demonstrate their impact and the subtle differences that emerge in this research. We have run into the problem that, even though we requested and received university computers with several multi-core processors to help run the many threads in our network, each individual simulation still takes a sluggish, ungodly long amount of time. One simulation with a network of just 60 nodes took nearly 3 days for the deterministic algorithm to finish, and the simulations we were planning to run were ambitiously in the hundreds.
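For illustration, and assuming a simple boolean adjacency-matrix representation (our simulator's actual generator may differ), two of the graph families mentioned can be built like this:

```java
// Hedged sketch (not our simulator's actual generator) of how adjacency
// matrices for two of the graph families mentioned -- stars and paths -- can
// be built for a network of n nodes.
public class TestGraphs {

    // Star: node 0 is the center, linked to every other node.
    static boolean[][] star(int n) {
        boolean[][] adj = new boolean[n][n];
        for (int i = 1; i < n; i++) {
            adj[0][i] = true;
            adj[i][0] = true;
        }
        return adj;
    }

    // Path: node i is linked to node i+1.
    static boolean[][] path(int n) {
        boolean[][] adj = new boolean[n][n];
        for (int i = 0; i + 1 < n; i++) {
            adj[i][i + 1] = true;
            adj[i + 1][i] = true;
        }
        return adj;
    }

    public static void main(String[] args) {
        // Both families have n - 1 edges; only the shape differs.
        System.out.println("Star edges (n=60): " + countEdges(star(60))); // 59
        System.out.println("Path edges (n=60): " + countEdges(path(60))); // 59
    }

    static long countEdges(boolean[][] adj) {
        long edges = 0;
        for (int i = 0; i < adj.length; i++)
            for (int j = i + 1; j < adj.length; j++)
                if (adj[i][j]) edges++;
        return edges;
    }
}
```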

As a result, rather than running every simulation incrementally, we are currently devising a set of comparable simulations that will still allow the reader to see the significance and impact of the work. We are also noticing in the simulations that there are local minima and maxima in the time to count the nodes; for example, counting 20 nodes may take less time than counting 10. The work is obviously still continuing, and this possible relationship needs to be studied in much more depth. For now the work continues and will hopefully yield promising results in the time to come.