The following command, run from the repository root, will revert all uncommitted changes to tracked files with git (untracked files are left alone):
git checkout .
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards compatible manner, and
PATCH version when you make backwards compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
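As an illustrative sketch only (not part of the specification), the increment rules above can be written as a small function; the name `bump` and the change-category labels are mine:

```python
# Hypothetical sketch of the SemVer increment rules.
# `bump` is an illustrative name, not part of the specification.
def bump(version, change):
    """version: 'MAJOR.MINOR.PATCH'; change: 'major', 'minor', or 'patch'."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":   # backwards incompatible API change
        return f"{major + 1}.0.0"
    if change == "minor":   # backwards compatible functionality added
        return f"{major}.{minor + 1}.0"
    if change == "patch":   # backwards compatible bug fix
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(change)

print(bump("3.1.4", "minor"))  # → 3.2.0
```

Note that a major or minor bump resets the lower-order fields to zero, as the rules require.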
In the world of software management there exists a dread place called “dependency hell.” The bigger your system grows and the more packages you integrate into your software, the more likely you are to find yourself, one day, in this pit of despair.
In systems with many dependencies, releasing new package versions can quickly become a nightmare. If the dependency specifications are too tight, you are in danger of version lock (the inability to upgrade a package without having to release new versions of every dependent package). If dependencies are specified too loosely, you will inevitably be bitten by version promiscuity (assuming compatibility with more future versions than is reasonable). Dependency hell is where you are when version lock and/or version promiscuity prevent you from easily and safely moving your project forward.
As a solution to this problem, I propose a simple set of rules and requirements that dictate how version numbers are assigned and incremented. These rules are based on but not necessarily limited to pre-existing widespread common practices in use in both closed and open-source software. For this system to work, you first need to declare a public API. This may consist of documentation or be enforced by the code itself. Regardless, it is important that this API be clear and precise. Once you identify your public API, you communicate changes to it with specific increments to your version number. Consider a version format of X.Y.Z (Major.Minor.Patch). Bug fixes not affecting the API increment the patch version, backwards compatible API additions/changes increment the minor version, and backwards incompatible API changes increment the major version.
I call this system “Semantic Versioning.” Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next.
The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”,
“SHOULD NOT”, “RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be
interpreted as described in RFC 2119.
This is not a new or revolutionary idea. In fact, you probably do something close to this already. The problem is that “close” isn’t good enough. Without compliance to some sort of formal specification, version numbers are essentially useless for dependency management. By giving a name and clear definition to the above ideas, it becomes easy to communicate your intentions to the users of your software. Once these intentions are clear, flexible (but not too flexible) dependency specifications can finally be made.
A simple example will demonstrate how Semantic Versioning can make dependency hell a thing of the past. Consider a library called “Firetruck.” It requires a Semantically Versioned package named “Ladder.” At the time that Firetruck is created, Ladder is at version 3.1.0. Since Firetruck uses some functionality that was first introduced in 3.1.0, you can safely specify the Ladder dependency as greater than or equal to 3.1.0 but less than 4.0.0. Now, when Ladder version 3.1.1 and 3.2.0 become available, you can release them to your package management system and know that they will be compatible with existing dependent software.
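The Firetruck/Ladder constraint can be checked mechanically. A minimal sketch, assuming simple three-field numeric versions; the `satisfies` helper is hypothetical, and real package managers have much richer range syntax:

```python
# Minimal sketch of a ">= 3.1.0, < 4.0.0" style dependency check.
# `satisfies` is an illustrative helper, not a real package-manager API.
def parse(version):
    """Turn 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def satisfies(version, lower="3.1.0", upper="4.0.0"):
    """True if lower <= version < upper, compared field by field."""
    return parse(lower) <= parse(version) < parse(upper)

print(satisfies("3.1.1"))  # → True  (patch release, compatible)
print(satisfies("3.2.0"))  # → True  (minor release, compatible)
print(satisfies("4.0.0"))  # → False (major release, incompatible)
```

Tuple comparison is what makes this work: it compares the major fields first, then minor, then patch, exactly matching the precedence SemVer intends.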
As a responsible developer you will, of course, want to verify that any package upgrades function as advertised. The real world is a messy place;
there’s nothing we can do about that but be vigilant. What you can do is let Semantic Versioning provide you with a sane way to release and upgrade
packages without having to roll new versions of dependent packages, saving you time and hassle.
If all of this sounds desirable, all you need to do to start using Semantic Versioning is to declare that you are doing so and then follow the rules. Link
to this website from your README so others know the rules and can benefit from them.
The FFT timing measurement is intended to reflect the common case where many FFTs of the same size, indeed of the same array, are required. Thus, we break the measurement into two parts:
The mflops figure is a scaled version of the speed:
mflops = 5 N log2(N) / (time for one FFT in microseconds)    for complex transforms
mflops = 2.5 N log2(N) / (time for one FFT in microseconds)  for real-data transforms
where N is the number of data points (the product of the FFT dimensions).
This is not an actual flop count; it is simply a convenient scaling, based on the fact that the
radix-2 Cooley-Tukey algorithm asymptotically requires 5 N log2(N) floating-point operations. It allows us to compare the performance for many different sizes on the same graph, get a sense of the cache effects, and provide a rough measure of “efficiency” relative to the clock speed.
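The scaling above is a direct calculation; a small sketch (the function name is mine):

```python
import math

def mflops(n, seconds, real_data=False):
    """Scaled FFT speed: 5 N log2(N) / (time in microseconds) for
    complex data, and 2.5 N log2(N) for real data.  Not a true flop
    count, just the conventional normalization described above."""
    factor = 2.5 if real_data else 5.0
    return factor * n * math.log2(n) / (seconds * 1e6)

# e.g. a complex 1024-point FFT that takes 10 microseconds:
print(mflops(1024, 10e-6))  # → 5120.0
```

Because the scaling is the same for every routine at a given size, ratios of these numbers are ratios of actual speeds, which is all the benchmark graphs need.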
To keep the graphs readable, we plot either the forward or the backward transform speed, whichever is faster on average.
To quantify the “average” speed of an FFT routine, and also to reorder the plot legend for improved readability, we define a plot rank for each FFT as follows. First, for each transform size in a plot, compute:
rank = (mflops for FFT) / (mflops for fastest FFT for that size).
The plot rank of a given FFT is defined as the median of its ranks for all sizes in the plot. Note: The
plot rank should not be interpreted as an absolute measure of performance.
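The plot-rank definition can be sketched as follows; the routine names and mflops numbers are hypothetical, purely for illustration:

```python
from statistics import median

# mflops per transform size for each routine (hypothetical numbers).
speeds = {
    "fftw":  {16: 900.0, 64: 1200.0, 256: 1400.0},
    "naive": {16: 300.0, 64:  400.0, 256:  350.0},
}

def plot_rank(name):
    """Median over sizes of (this routine's mflops) divided by the
    fastest mflops recorded for that size."""
    ratios = [
        speeds[name][size] / max(s[size] for s in speeds.values())
        for size in speeds[name]
    ]
    return median(ratios)

print(plot_rank("fftw"))   # → 1.0 (fastest at every size here)
print(plot_rank("naive"))
```

Using the median rather than the mean keeps one anomalously good or bad size from dominating the legend ordering, which is consistent with plot rank being a relative, not absolute, measure.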
In the raw benchmark data output, the speed for all routines, for
both forward and backward transforms, is collected in the file
host.speed in the space-delimited format:
name-of-code transform-type transform-size mflops time setup-time
where the times are in seconds.
transform-type is a four-character string consisting of the precision (double = d, single = s), the data type (complex = c, real = r), the placement (in-place = i, out-of-place = o), and the direction (forward = f, backward = b). For example, transform-type = dcif denotes a double-precision, complex-data, in-place, forward transform.
A minor error in algebra can often be detected because it results in an equation that is dimensionally incorrect. Dimensional analysis is therefore used to check the correctness of a derived equation. Most physical quantities can be expressed in terms of combinations of five basic dimensions:
Length (L)
Mass (M)
Time (T)
Electrical current (I)
Temperature (Θ)
These five dimensions have been chosen as being basic because they are easy to measure in experiments.
The dimensions of speed are length divided by time, or simply L/T.
The dimensions of area are L × L = L², since area can always be calculated as a length times a length.
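The check can be mechanized by tracking the exponent of each basic dimension; the tuple representation (L, M, T, I, Θ) below is one possible encoding, chosen for illustration:

```python
# Represent a quantity's dimensions as exponents of (L, M, T, I, Theta).
L = (1, 0, 0, 0, 0)   # length
T = (0, 0, 1, 0, 0)   # time

def mul(a, b):
    """Multiplying quantities adds their dimension exponents."""
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):
    """Dividing quantities subtracts their dimension exponents."""
    return tuple(x - y for x, y in zip(a, b))

speed = div(L, T)   # L / T
area  = mul(L, L)   # L * L = L^2

print(speed)  # → (1, 0, -1, 0, 0)
print(area)   # → (2, 0, 0, 0, 0)

# A dimensionally correct equation has identical exponents on both
# sides; e.g. distance = speed * time:
assert mul(speed, T) == L
```

If an algebra slip produced, say, distance = speed * time², the two sides' exponent tuples would differ and the assertion-style check would flag it.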
To quickly check where the Ruby headers are installed on a machine, open a terminal and print Ruby's load path:
ruby -e 'puts $:.join("\n")'
Reference: SWIG and Ruby