- Author
- John Perry, john.perry@usm.edu
- Date
- 2014-present
- Copyright
- GNU General Public License; see Copyright details below
Overview
This project is designed to be a testbed/reference implementation for dynamic Gröbner basis computation, using the algorithms described in [4] and [3], along with some newer ideas.
No claim to high efficiency or exemplary programming is implied. I wrote this to be relatively usable (compared to an earlier Sage implementation) and easy to modify, especially as regards modularity, polymorphism, and getting detailed data.
Installation and dependencies
For this program, a simple make BUILD=build should do. However, there are several prerequisite programs you need to install first:
- a C++ compiler that understands C++11;
- GMP, the Gnu Multi-Precision library [5];
- GLPK, the Gnu Linear Programming Kit [8];
- PPL, the Parma Polyhedra Library [1].
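Package managers usually carry development packages for all three libraries. The package names below are my best guesses for common distributions and may differ on yours:

```shell
# Fedora (package names assumed; adjust if your repositories differ)
sudo dnf install gcc-c++ make gmp-devel glpk-devel ppl-devel

# Debian/Ubuntu (package names assumed)
sudo apt-get install g++ make libgmp-dev libglpk-dev libppl-dev

# macOS with Homebrew (package names assumed)
brew install gmp glpk ppl
```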
The codebase also contains some unrelated forays into parallel programming, which are probably of no interest to the average user, but require:
- the OpenMPI toolchain. If you do not have OpenMPI and are not interested in these (and I don’t see why you should be), just comment out any line that involves mpicxx.
On a Linux system you can install these very easily (in Fedora I used Apper). All dependencies should build without difficulty on a Macintosh. I have not tried to build on Windows, but I don’t use hardware magic, so it ought to build.
Usage
There are two ways to use the system.
- Write a program that builds a polynomial system and accesses the library directly. Most of the examples provided illustrate that approach. This is annoying and rather difficult, though not as difficult as it was before the Indeterminate class.
- Use the user_interface file. A number of systems are defined in the directory examples_for_user_interface. The format is defined in the documentation to user_interface(). This is still annoying, but not quite so difficult.
Current status
As of January 2017:
- The code works consistently on many different examples. However, it is slow: I am not trying to reach Singular-level optimization (not at the current time, anyway). Typically, this code is one to two orders of magnitude slower than Singular.
- Nevertheless, it outperforms Singular on at least one system: Caboara’s Example 2.
- Unless I’m doing something very stupid, the weighted sugar strategy is an unmitigated disaster and should be avoided.
- The code is slow, though in the last week of July the implementation of an \(O(1)\) memory manager cut the time required for the dynamic implementation by nearly 2/3. A very simple optimization of assigning an object’s array to a local variable before entering a loop cut the time required for both dynamic and static by roughly 40%.
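For readers unfamiliar with the technique, the following is a minimal sketch of the kind of \(O(1)\) fixed-size allocator described above: freed blocks are threaded into a singly linked free list, so both allocation and deallocation are constant-time. This is not the project’s actual memory manager; all names are illustrative.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-size block pool: allocate() and deallocate() are O(1) because
// freed blocks are pushed onto a singly linked list threaded through
// the blocks themselves. Illustrative sketch, not the project's code.
class Block_Pool {
public:
  explicit Block_Pool(std::size_t block_size, std::size_t blocks_per_chunk = 256)
      : per_chunk(blocks_per_chunk) {
    size = block_size < sizeof(Node) ? sizeof(Node) : block_size;
    const std::size_t a = alignof(std::max_align_t);
    size = (size + a - 1) / a * a;   // keep every block suitably aligned
  }
  ~Block_Pool() { for (char *c : chunks) delete[] c; }
  Block_Pool(const Block_Pool &) = delete;
  Block_Pool &operator=(const Block_Pool &) = delete;

  void *allocate() {
    if (free_list == nullptr) grow(); // amortized: one new[] per chunk
    Node *n = free_list;
    free_list = n->next;
    return n;
  }

  void deallocate(void *p) {          // O(1): push onto the free list
    Node *n = static_cast<Node *>(p);
    n->next = free_list;
    free_list = n;
  }

private:
  struct Node { Node *next; };

  void grow() {
    char *chunk = new char[size * per_chunk];
    chunks.push_back(chunk);
    for (std::size_t i = 0; i < per_chunk; ++i)
      deallocate(chunk + i * size);   // seed the free list with new blocks
  }

  std::size_t size;
  std::size_t per_chunk;
  Node *free_list = nullptr;
  std::vector<char *> chunks;
};
```

Monomials and polynomial terms are small, fixed-size objects allocated and freed constantly during reduction, which is exactly the pattern such a pool serves well.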
- Note
- Exponent-packing is currently turned off, since it doesn’t seem to help enough to make it worth the trouble.
- Warning
- I implemented the exponent-packing in rather shoddy fashion: the first 8 exponents are cast to uint8_t, then shifted appropriately. Comparisons for the others are made explicitly. Trouble will arise when one of the first 8 exponents exceeds \(2^8-1\), though in practice that hasn’t been a problem so far. This may be easy to fix: if the packed comparison passes, test all the variables explicitly, not just those after the first 8. But issues could arise with arithmetic operations as well: the multiplication and division operators are implemented semi-intelligently, that is, intelligently under the assumption that the exponents are valid. So problems could arise there even after we fix the equality comparison.
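To make the hazard concrete, here is a sketch of the packing step (illustrative names, not the project’s actual code): the first 8 exponents are truncated to uint8_t and shifted into a single 64-bit word, so two monomials can be compared on those variables with one integer comparison.

```cpp
#include <cassert>
#include <cstdint>

// Packs (at most) the first 8 exponents into one 64-bit word, 8 bits
// apiece. The cast to uint8_t silently wraps any exponent above 255,
// which is exactly the failure mode described in the warning above.
uint64_t pack_first_8(const int *exp, int n) {
  uint64_t packed = 0;
  const int limit = n < 8 ? n : 8;
  for (int i = 0; i < limit; ++i)
    packed = (packed << 8) | static_cast<uint8_t>(exp[i]); // wraps above 255!
  return packed;
}
```

The hazard is visible directly: static_cast<uint8_t>(256) is 0, so an exponent of 256 packs identically to an exponent of 0, and the packed comparison reports a false equality.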
To-do list
In no particular order, aside from the indicated priority. See the to-do page for a full list (I may have missed some things).
Higher priority
- Todo:
- These are the highest priority items: things I think are needed before I’d call it “ready.”
- Organize files into directories.
- General improvements to efficiency based on profiling. (ongoing)
- Implement simplex solver as oracle for DDM, compare with DDM (idea due to D. Lichtblau).
- Optimize length() in Polynomial_Linked_List.
- Add Fukuda and Prodon’s cdd as an LP_Solver. [6]
- Bring polynomial iterators in line with C++ convention.
- Implement other C++11 modernizations (auto, noexcept, override, …).
- Generalize/improve the memory manager.
- Add PPL as an LP_Solver. [1]
- Implement Caboara’s examples.
- Implement graded Hilbert numerators.
- Implement or link to a simplex solver, compare with DDM.
- Determine what's wrong with the \(4\times4\) system. (Turns out nothing was wrong: the system is simply not amenable to polyhedra.)
- Implement a global analysis at the beginning of the algorithm.
- Implement Hilbert polynomials using multiple-precision arithmetic. (Denominators get too large for long long!!!)
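To see how quickly 64-bit integers run out here, a standalone check (illustrative, not project code) finds the first factorial that no longer fits in unsigned long long; Hilbert-polynomial denominators involve exactly such factorials.

```cpp
#include <cassert>

// Returns the smallest n such that n! no longer fits in unsigned long
// long (64 bits on the usual platforms). The division test detects the
// wraparound: if f * i overflowed, then (f * i) / i cannot equal f.
int first_overflowing_factorial() {
  unsigned long long f = 1;
  for (int i = 2; ; ++i) {
    unsigned long long next = f * i;
    if (next / i != f) return i;
    f = next;
  }
}
```

On the usual 64-bit unsigned long long this returns 21, so even a modest number of variables pushes the denominators past native integers; GMP [5], already a dependency, is the natural fallback.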
Medium priority
- Todo:
- These items would be nice, but aren’t a big deal for me at present.
- Improve rings and fields:
- Create a general Ring class.
- Build polynomial rings off rings, not off fields. This could be difficult, since we typically want polynomials to have invertible coefficients. It doesn’t seem strictly necessary, though: S-polynomials and top-reductions, for instance, can be computed by multiplying by the leading coefficient of the other polynomial, rather than by dividing by one’s own coefficient.
- Implement Dense_Univariate_Integer_Polynomial as a proper Polynomial representation.
- Create a general Euclidean_Ring class. Add to it the divide_by_common_term() function.
- Create a general Field class.
- Reimplement Double_Buffered_Polynomial so its arrays contain pointers to Monomial, rather than an expanded Monomial. See if that changes things.
- Re-examine what’s going on with masks, since the plus to efficiency doesn’t seem worth the effort.
- Implement marked polynomials with a dynamic algorithm that works practically in the grevlex order, with the marked term being the true leading monomial. This may be very inefficient to reduce.
- Implement a Dictionary_Linked_Polynomial class, where any term points to one unique instance of a monomial, rather than having many copies of monomials in different polynomials. The upside is that the equality test during canonicalization is instantaneous (compare pointers); downsides may include finding/sorting the monomials, and indirection.
- Detach monomial ordering from monomials, since caching ordering data doesn’t seem to help much?
- Implement a Polynomial_Builder class to help build polynomials more easily by reading from an input file. That way we don’t have to write a fresh control program for each example system. (see user_interface())
- Implement an Indeterminate class and a Polynomial_Builder class to help build polynomials more easily.
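On the “build polynomial rings off rings” item above: the division-free S-polynomial alluded to there can be written by cross-multiplying leading coefficients. With \(\mathrm{lm}\) the leading monomial, \(\mathrm{lc}\) the leading coefficient, and \(\sigma = \mathrm{lcm}(\mathrm{lm}(f), \mathrm{lm}(g))\),

\[ S(f,g) \;=\; \mathrm{lc}(g)\,\frac{\sigma}{\mathrm{lm}(f)}\,f \;-\; \mathrm{lc}(f)\,\frac{\sigma}{\mathrm{lm}(g)}\,g. \]

Both products have leading term \(\mathrm{lc}(f)\,\mathrm{lc}(g)\,\sigma\), so it cancels without any coefficient division; over a field this differs from the usual S-polynomial only by the unit \(\mathrm{lc}(f)\,\mathrm{lc}(g)\).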
Lower priority
- Todo:
- I’m not sure these are worth doing.
- Most skeleton code seems to have little overhead, so most “improvements” related to it fall here:
- Implement DDM with the Fukuda-Prodon criterion, compare to Zolotykh’s.
- Implement Roune’s algorithms for Hilbert functions.
- Compare each potential PP with all other potential PP’s, reducing the number of false positives. [This does not seem to be necessary at the moment, as the overhead is quite small, but it is still a thought.]
- Add a hash mechanism to the constraint class to help avoid redundancy.
- Create a Matrix_Ordering_Data class as a subset of Monomial_Order_Data.
- Add an insert() function to Monomial_Node to insert another Polynomial_Linked_List, subsequently to be destroyed.
- Think about computing all inverses of a small prime field immediately at startup.
- Test matrix orderings more thoroughly.
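On the “all inverses at startup” item above, here is one well-known way to do it (illustrative code, not the project’s field classes): the recurrence \(i^{-1} \equiv -\lfloor p/i\rfloor \,(p \bmod i)^{-1} \pmod p\) fills the whole table in \(O(p)\) time, versus \(O(\log p)\) per element with the extended Euclidean algorithm on demand.

```cpp
#include <cassert>
#include <vector>

// Builds the table of inverses modulo a prime p in O(p) time using
//   inv[i] = -(p / i) * inv[p % i] mod p,
// which follows from writing p = (p / i) * i + (p % i) and reducing
// mod p. Each inv[p % i] is already available when inv[i] is computed,
// because p % i < i. Entry 0 is unused (0 has no inverse).
std::vector<unsigned> inverse_table(unsigned p) {
  std::vector<unsigned> inv(p, 0);
  if (p > 1) inv[1] = 1;
  for (unsigned i = 2; i < p; ++i)
    inv[i] = static_cast<unsigned>(
        (p - (p / i) * static_cast<unsigned long long>(inv[p % i]) % p) % p);
  return inv;
}
```

For the small primes a prime-field coefficient type uses, the table replaces every inversion during reduction with a single array lookup.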
Apologia pro labora sua
This probably has bugs I haven’t yet worked out, but I have done a lot of bug-fixing, including the use of valgrind to identify and repair memory leaks.
I originally implemented this in Sage. That was purely a proof-of-concept product; it was very slow, and I wanted to improve on it. Unfortunately:
- It is not easy to use the guts of Singular from Sage. In particular, the geobuckets. But even if that were possible…
- In all the computer algebra systems I’ve looked at, a monomial ordering is part of the ring structure. At least in Singular, a “wgrevlex” ordering received a different structure than a “grevlex” ordering, in particular a disadvantageous one. So my preliminary implementation in Singular worked, but tended to be a lot slower than std() even though it did less work. In addition, the implementation crashed often, for reasons I wasn’t able to sort out, even with help from the developers.
That is when I decided to develop this code. As it turns out, that was a good thing, because the original Sage version had a number of bugs that I discovered only while developing later versions.
Although I wrote it from scratch, without a doubt it reflects what I saw in CoCoA and Singular. No great claim to originality or even usability is implied. The intent of this software is not to compete with those systems, but to provide a more robust launchpad for implementing the algorithm than I had before.
Copyright details
This file is part of DynGB.
DynGB is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version.
DynGB is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with DynGB. If not, see http://www.gnu.org/licenses/.