Exact Enumeration of Solid Partitions

This is the home page for the project set up for counting solid partitions. The goal of the project is to count the number of solid partitions of all positive integers $\leq 100.$ At the start of this project in September 2010, these numbers were known for positive integers $\leq 50.$ The tentative schedule for this project is as follows:

Year          | 2010      | 2011      | 2012      | 2013      | 2014      | 2015
N             | 52        | 60        | 70        | 80        | 90        | 100
Est. $p_3(N)$ | $10^{14}$ | $10^{16}$ | $10^{18}$ | $10^{20}$ | $10^{22}$ | $10^{24}$

As one can see, each step involves roughly a hundredfold increase in computational requirements. This can be achieved in several ways: improvements in the code, an increase in the number of branches, new ideas in parallelization, faster computers, and so on. Will we be able to keep to the above schedule? Only time will tell.
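To make the object being counted concrete: a solid partition of $n$ is a three-dimensional array of non-negative integers that is weakly decreasing along each of the three directions and whose entries sum to $n$. The following minimal C++ sketch (an illustration only, not the project's code) counts solid partitions of small $n$ by brute force; anything much beyond $n\approx 10$ requires the kind of machinery described in the log below.

```cpp
// Brute-force counter for solid partitions of n: fill an n x n x n array in
// lexicographic order of (i,j,k), keeping each entry no larger than its three
// predecessors along the axes.  Feasible only for small n.
#include <algorithm>
#include <iostream>
#include <vector>

namespace {

int N;                               // the target sum
std::vector<int> a;                  // flattened N x N x N array of parts

int& at(int i, int j, int k) { return a[(i * N + j) * N + k]; }

// 'cell' indexes the next entry to fill; 'remaining' is N minus the sum so far.
// Once remaining == 0, the rest of the entries are forced to be 0, which is
// always compatible with the weakly-decreasing conditions.
long long countFrom(int cell, int remaining) {
    if (remaining == 0) return 1;
    if (cell == N * N * N) return 0;
    const int i = cell / (N * N);
    const int j = (cell / N) % N;
    const int k = cell % N;
    int cap = remaining;             // an entry can never exceed what is left
    if (i > 0) cap = std::min(cap, at(i - 1, j, k));
    if (j > 0) cap = std::min(cap, at(i, j - 1, k));
    if (k > 0) cap = std::min(cap, at(i, j, k - 1));
    long long total = 0;
    for (int v = 0; v <= cap; ++v) { // try every admissible value for this entry
        at(i, j, k) = v;
        total += countFrom(cell + 1, remaining - v);
    }
    at(i, j, k) = 0;                 // backtrack
    return total;
}

}  // namespace

int main() {
    // Expected output: p_3(n) = 1, 4, 10, 26, 59, 140, 307, 684 for n = 1..8.
    for (int n = 1; n <= 8; ++n) {
        N = n;
        a.assign(n * n * n, 0);
        std::cout << "p_3(" << n << ") = " << countFrom(0, n) << "\n";
    }
    return 0;
}
```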

Results

Date      | Oct. 2010 | Nov. 2010 | Dec. 2010 | June 2011 | Jan. 2013
N         | 52        | 55        | 62        | 68        | 72
Code used | v2.0      | v3.0      | v3.2      | v3.2      | v4
Branches  | 1491      | 1542      | 4167      | 18334     | 118794

Current state of the project: Paused

We have a working program written in C++ that can reproduce some of the known numbers in reasonable time.

  • [Sept 12, 2010] The code has been parallelized with 1491 branches and tested to reproduce known results for N=40 (with the help of Naveen Sharma).
  • [Sept 13, 2010] Started running the long branches (for N=52) on the Leo supercluster at IITM. Began the process of setting up the interface for volunteer computing. Arun Chaganty suggests that we compile the code with -O3 optimization, which is seen to provide a factor of three improvement in run times. So we expect the longest branches to run in less than 5 days (on Leo) instead of the initial estimate of 10 days.
  • [Sept 14, 2010] Have a shell script that validates submissions of results.
  • [Sept 21-22, 2010] Runs A and B on all three trunks (=451/1491 branches) completed. Used leo/vega/physics/curie to run all the jobs. Eventually, jobs were bunched together (‘undoing’ some of the parallelization) so that run times for each bunch were of the order of 5 days. Awaiting the setting up of the portal for volunteer computing so that the remaining runs can be carried out by volunteers.
  • [Oct 16, 2010] All the N=52 runs have been completed. Need to combine them to get the number of solid partitions. Benchmarking for the N=60 runs being carried out. It appears that run times (for N=60) increase by a factor of 360 with respect to the N=52 run times.
  • [Oct 17, 2010] Completed the N=52 run and we now have two new numbers of solid partitions. Here are the results for N<=52.
\begin{align} p_{3}(51) = 11622\ 14153\ 25837\quad \textrm{and}\quad p_3(52) = 19379\ 44766\ 58112 \nonumber \end{align}
  • [Oct 20, 2010] Breakthrough in parallelization: Srivatsan proposes working with equivalence classes of branches, where all branches that belong to an equivalence class have the same subsequent tree. This implies that one run per equivalence class will suffice, provided one tracks the multiplicities of the equivalence classes (see the schematic formula below). Thanks to this observation, we can now go to higher depths. For instance, at depth 12, there are around 28 million branches but only 5172 equivalence classes. This will be incorporated in the next run, for N=60 or a larger number.
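Schematically (in notation of my own, not the project's), cut the tree of partial configurations at some depth $k$, let $m_c$ be the number of branches in equivalence class $c$, and let $t_c(n)$ be the contribution to the count of solid partitions of $n$ obtained by running a single representative of class $c$. The observation then amounts to
\begin{align} p_3(n) = \sum_{c} m_c\, t_c(n) \nonumber \end{align}
so that one run per class, weighted by its multiplicity, reproduces the full count.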
  • [Oct 24, 2010] v3.0 of the code has been parallelized at depth 10 with 1542 branches. Testing with N=55. This is a prelude to running the code for N=60 using volunteer computing.
  • [Oct 28, 2010] Srivatsan creates v3.2 of the code. The modification (w.r.t. v3.0) is that the dependence on the end-point is removed in the equivalence class. While this is a minor change, it is significant as it reduces the number of equivalence classes at depth 12 by roughly a factor of 3.5 (5172 becomes 1479). It is clear that we can compute the numbers of solid partitions for $N \leq 70$, but going beyond this will need a significant improvement.
  • [Nov 3, 2010] v3.0 of the code at depth 10 provides three new numbers. Here are the complete results for N<=55.
\begin{align} p_3(53) = 32238\ 23655\ 07746\ ,\ p_3(54) = 53505\ 67710\ 14674\ , \ p_3(55) = 88603\ 33844\ 75166 \nonumber \end{align}
  • [Nov 7, 2010] Proved that the number of equivalence classes at depth $k$ is the same as the number of plane partitions of $k$. This provides a simple formula for the number of equivalence classes at any depth; see the sketch below.
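Since the class count at depth $k$ equals the number of plane partitions of $k$, MacMahon's generating function $\prod_{m\ge 1}(1-q^m)^{-m}$ tabulates it directly. Here is a minimal C++ sketch (an illustration, not the project's code) that expands this product:

```cpp
// Tabulate the plane-partition numbers PL(k), i.e. the coefficients of
// prod_{m>=1} 1/(1-q^m)^m, which by the Nov 7, 2010 result equal the number
// of equivalence classes at depth k.
#include <cstdio>
#include <vector>

int main() {
    const int D = 21;                        // maximum depth of interest
    std::vector<long long> c(D + 1, 0);
    c[0] = 1;                                // the empty plane partition
    for (int m = 1; m <= D; ++m)             // factor 1/(1-q^m)^m ...
        for (int rep = 0; rep < m; ++rep)    // ... applied as m factors of 1/(1-q^m)
            for (int n = m; n <= D; ++n)
                c[n] += c[n - m];            // multiply the series by 1/(1-q^m)
    for (int k = 1; k <= D; ++k)
        std::printf("depth %2d : %lld classes\n", k, c[k]);
    return 0;
}
```

Its output is consistent with the counts quoted elsewhere in this log: 1479 classes at depth 12, 4167 at depth 14, 18334 at depth 17, 75278 at depth 20 and 118794 at depth 21.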
  • [Nov 10, 2010] Started the run for N=62 using a parallelized version of v3.2 at depth 14 (=4167 equivalence classes). Results are expected at the end of December/early January. The code has been tested to reproduce results for N=50. Some of the parts will be done through volunteer computing; the portal is expected to be ready soon.
  • [Dec 1, 2010] Issues with the parallel file system on the supercluster Vega led to its closure for a week. It then came back up, but errors persisted, with some data files not getting written. Things are back on track now and results should be expected by Dec. 10, or Dec. 15 at the latest. The portal isn't ready yet.
  • [Dec 7, 2010] Testing for N=70 at depth 16 has already started, with validation of the branching via an N=55 run. No improvements in the code since the previous run; hopefully, some will arrive soon enough! The outlook for obtaining numbers beyond 70 remains bleak unless we come up with improvements.
  • [Dec 9, 2010] All the runs for N=62 have been completed. Here are the seven new numbers, $p_3(56)$ through $p_3(62)$, bringing the total number of additions to twelve.
\begin{align} p_3(56) = 1 46400 93392 99229\ ,\ p_3(57) = 2 41380 42828 01444\ ,\ p_3(58) = 3 97140 96826 33930\ ,\ p_3(59) = 6 52064 95439 12193\ ,\ \end{align}
\begin{align} p_3(60) = 10 68461 42257 15559\ ,\ p_3(61) = 17 47294 70062 57293\ ,\ p_3(62) = 28 51869 10933 88854\ .\ \end{align}
  • [Dec 14, 2010] Successfully completed the test run for N=55 at depth 16 using v3.2 of the code. Carrying out timing runs for N=70 at the same depth. Naive scaling suggests that some branches will need runs of more than 10 days. We may need to go to a higher depth and/or use an OpenMPI version of the program for the longer runs. The outlook for going beyond N=70 remains bleak.
  • [Dec 29, 2010] Generated the 18334 branches for depth 17 — took more than 200 hours of CPU time. Need to start a timing run (as well as branching test) for N=55 before starting the code for N=70. Expect results for N=70 towards the end of February 2011. Naveen has a working version of the Monte Carlo code for 1D partitions.
  • [Jan 3, 2011] N=55 test run at depth 17 has been completed and the branching has been validated. 5000 of the 18334 branches for N=70 have been submitted to the HPCE machine and runs will be done during the last week of January.
  • [Jan 21, 2011] It appears more practical to aim for the number of solid partitions for N=68 in the current run. The timings of the ongoing run, which involves the branches with smaller runtimes, lead us to this conclusion. So the remaining 13334 branches will be run for N=68. This way, the results will be available during April 2011. Future runs will be undertaken after we come up with modifications that might enable us to go beyond N=68.
  • [Feb 22, 2011] Around 10,500 branches have been submitted to Leo/Vega. Of these, 6000+ branches have been completed and the rest should be done towards the end of March. The remaining 8000+ branches will start running in the second half of March and will hopefully be done by the end of April.
  • [March 27, 2011] Was travelling for 3 weeks and our supercluster vega was rebooted during that period. Lost about 2-3 weeks of runtime as I had no remote access to vega. All jobs were restarted only early last week, so we should now expect to be done only in May. 10,000+ branches have been completed. Another 2,000 branches have been submitted.
  • [April 19, 2011] Yet another stoppage on vega; the vendor has promised to fix the SFS problem for good. Only a handful of jobs are running on the smaller cluster leo for now. About 12500 branches have been completed. Hopefully, the numbers will be out by the end of May. Meanwhile, here is the new set of predictions.
  • [May 19, 2011] About 96% of the jobs are done. 814 branches are to be completed hopefully by the end of this month. The SFS problem in vega appears to have been fixed this time around and things have been going smoothly. A manuscript on the asymptotics of higher dimensional partitions with Srivatsan and Naveen as co-authors is being readied for publication.
  • [June 1, 2011] Runs are completed. Validation of data is going on and results will be available in a day or two. The paper with Naveen and Srivatsan has appeared on the arXiv without the N=68 data. That will be included once the results are ready. Here is a figure which shows how well the asymptotic formula works.
[Figure Log3d.jpg: the exact numbers compared with the asymptotic formula.]
  • [June 1, 2011] Here are six new numbers.
\begin{align} p_3(63)= 46458506464748807 \ , \ p_3(64)= 75542021868032878\ , \ p_3(65)= 122606799866017598\ , \ \end{align}
\begin{align} p_3(66)=198635761249922839 \ , \ p_3(67)=321241075686259326 \ , \ p_3(68)= 518619444932991189 \ . \ \end{align}
  • [June 4, 2011] The solid partitions project will be taking a break. The next run will be carried out once we have fresh ideas that lead to further improvement in the code. Meanwhile, the focus will shift to enumerating four and five dimensional partitions by porting v3.2 of the code. The home pages for these projects are: Four Dimensional Partitions || Five Dimensional Partitions.
  • [Feb 2, 2012] Intel India has expressed interest in the project and will support it financially and, more importantly, by providing access to computing power as well as technical support. The next run, for N=74, is being planned.
  • [Mar 2, 2012] A recent paper by Ekhad and Zeilberger discusses the enumeration of Solid Standard Young Tableaux (SSYT) and enumerates the first thirty terms in the series. In the paper with Naveen and Srivatsan, I defined Almost Topological Sequences (ATS) for a set P, generalising a definition of Knuth. In that paper, I show that there is a bijection between SSYT and the ATS for $P=\mathbb{N}^3$ that are relevant for solid partitions! The number of SSYT is the same as the number of ATS (a.k.a. the number of nodes) at a given depth. We had determined the first 17 numbers (see here), which agree with the list given on page 3 of the EZ paper (nice!). The EZ list is useful: it tells us that at depth 20, there are 63157 41514 45665 nodes (SSYT/ATS), while there are only 75278 runs (plane partitions). Further, Zeilberger discusses a way of generating all ATS associated with a given plane partition. The next generation of code should utilise this, as we can then consider going to depth 20 for N=74.
  • [Mar 30, 2012] We now have code which generates all nodes (i.e. SSYT/ATS) given a shape (plane partition). In principle, we can now generate the data associated with all nodes at depth 20 for the next round of runs. We are also working on modifying our implementation of the Knuth algorithm to reduce the memory overhead, among other things. Hopefully, we should have the new code (v4.0) ready before the end of April 2012.
  • [Apr 5, 2012] I have an implementation of the basic Knuth algorithm with a reduced memory footprint. The memory requirement of an earlier version (v2/3) of the Knuth code grew as $N^{d-1}$ for d-dimensional partitions. Now it grows only as $d\,N$. However, for solid partitions, for which memory is not an issue, v2 of the Knuth code runs faster and will be preferred unless further optimization of the new code changes the runtimes.
  • [May 12, 2012] Srivatsan comes up with the first version of v4.0 of the code. This has to be debugged and checked. This will happen only in July-August 2012 after the summer break.
  • [Sept 11, 2012] We need to make modifications in order to reduce the runtimes for generation of the initial configurations at depth=21 before starting the next run. Things are getting difficult!
  • [Sept 12, 2012] Started generating initial configurations at depth=21. I anticipate that it will take six weeks (not a typo!) to generate all of them. One can then begin a test run at N=60 to validate the initial configurations, and that will take around a month. I have a problem with the availability of CPUs at the IITM supercluster: it is restricted to 16 at this point in time. IITM's new cluster is yet to be installed by the vendor even though the machine was delivered some time ago. I hope things ease up after that installation!
  • [Sept 15, 2012] I have finally submitted all jobs necessary for creating all initial configurations at depth=21. I hope to have the initial data ready before the end of October. Validation of the code will be done simultaneously depending on the availability of computer time.
  • [Sept 26, 2012] Created new code to generate all ATS given a particular shape and symmetry; this is based on the Knuth algorithm run in reverse and turns out to be faster than the one using the BM algorithm. Hopefully, the generation of initial data will happen faster. There will also be a break in a day or so when IITM's supercluster vega will be brought down and replaced by the new supercluster virgo. Virgo should be ready sometime during the first week of October.
  • [Oct 16, 2012] The new IITM cluster Virgo is released to users. The generation of branches is not yet complete (around 50-60% done), and the validation of branches will be carried out by reproducing the numbers of solid partitions for N=56. The generation of branches will be completed by the end of October and validation done by early November. The N=56 runs will also be used to estimate runtimes for N=72. We have simultaneously begun some of the N=72 runs. So, barring hiccups on the new supercluster, the next run will be in full swing by December 2012.
  • [Nov 4, 2012] The generation of all branches is complete. Validation of branching has been carried out by reproducing the numbers of solid partitions for N=56. This will soon be extended to N=60. An interesting issue arose during validation: it appeared that it would take an inordinate amount of time to generate the numbers due to missing configurations. The validation was therefore done using a depth 12 run with a simple modification to the post code.
  • [Nov 5, 2012] Validation of branching carried out at N=60. This is an important check on the pre part of the code. We have 20,024 (=19574+450) branches when organized by symmetry, instead of 118,794 (=19574*6+450*3) branches. The N=72 run is about 25% complete and we hope to have the new numbers before the end of the year.
  • [Nov 11, 2012] 45% of the runs are completed. There is still hope that the results will be available by the end of the year, even though usage of the new IITM supercluster is increasing and our access is hence diminishing.
  • [Dec 6, 2012] 80% of the runs have been completed. If things go on schedule, we should expect results by Christmas. Details of the current run are kept here. You will also find estimates obtained using Monte Carlo simulations.
  • [Dec 26, 2012] Following a planned maintenance shutdown from Dec. 19-21, the supercluster virgo has been limping with filesystem issues that the system engineers have been trying to sort out. This has delayed the completion of the current run, which should have happened around this time. It is truly frustrating, as 99% of the runs have been completed. If there are no further issues, we should be able to complete the run before the end of the year, surpassing the targeted number of solid partitions of 70.
  • [Dec 30, 2012] The good news is that 99.9% of the runs have been completed. The bad news is that virgo has been down for the past two days with only 26 branches to go. We need about 30 hours of continuous up time on virgo to finish.
  • [Jan 3, 2013] I finally managed to complete the run and we have four new numbers.
\begin{align} p_3(69) = 835840682363211461 \ ,\ p_3(70) = 1344830884987012679\ , \end{align}
\begin{align} p_3(71) = 2160197135019313694\ , \ p_3(72) = 3464274974065172792 \end{align}

Programming aspects

People who are working on this project:

  • B. Srivatsan (who has written the first working program and improved on it)
  • Suresh Govindarajan (project coordinator/chief tester)

Things to do:

  • Create code that dynamically generates all nodes associated with a given shape and prepares the data for the run. Done. As mentioned in the Sept 11, 2012 note, the pre-code also needed to be made more efficient; this has been tested. Done.
  • Change how data is stored: right now there is an array of size $2^N$ used to store at most $N$ non-zero entries. It makes sense to just store those locations (in the array) which have non-zero entries (see the sketch after this list). Done
  • Create an MPI version of the code that automatically generates the sub-tree to some additional depth and creates parallel runs that are automatically given to the nodes for further processing. (Should be ready by the end of the year for the next run, which will target N=80, if possible, starting from depth 21.)
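The storage change in the second item above admits a very simple realization. Here is a C++ sketch (illustrative only, with a hypothetical class name, not the project's actual data layout): only the locations carrying a non-zero entry are stored, so memory scales with the number of non-zero entries (at most $N$) rather than with the size of the index space.

```cpp
// Sparse storage of a configuration: locations that were never set (or were
// set back to zero) occupy no memory at all.
#include <cstddef>
#include <cstdint>
#include <unordered_map>

class SparseConfig {
public:
    // Read the value at 'loc'; absent locations are implicitly zero.
    int get(std::uint64_t loc) const {
        auto it = entries_.find(loc);
        return it == entries_.end() ? 0 : it->second;
    }
    // Write a value, dropping the slot when it becomes zero so that the map
    // never holds more than the genuinely non-zero entries.
    void set(std::uint64_t loc, int value) {
        if (value == 0) entries_.erase(loc);
        else entries_[loc] = value;
    }
    std::size_t nonZeroCount() const { return entries_.size(); }

private:
    std::unordered_map<std::uint64_t, int> entries_;
};
```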


Immediate Goals:

  1. Reproduce the numbers of solid partitions for $N \leq 52$ by the end of October 2010 (the original target was September). Done!
  2. Set up a portal for distributing the computation over several computers by several users by December 2010. This is occasionally called volunteer computing. Eventually, we will set up a BOINC server. Postponed to future runs.
  3. Start the next run ($N\ge 60$) by December 2010. Done!
  4. Started the next run ($N\leq 68$) during January 2011. Done
  5. Generate v4.0 of the code, which incorporates the Mar 2, 2012 note. More or less done.
  6. Start the next run ($N\leq 72$) during October 2012 (pushed back from May and then September). Note the delays!

Page maintained by Suresh Govindarajan

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License