
Alves, P.R.L., Duarte, L.G.S., and da Mota, L.A.C.P.
Computer Physics Communications. Mar 2020, Vol. 248. 1p.
 Subjects

TIME series analysis, REGRESSION analysis, GRAPHICAL user interfaces, GOODNESS-of-fit tests, CHI-squared test, PHASE space, UNIFORM Resource Locators, and PROPHECY
 Abstract

In the reconstruction scheme, the results of predictions from chaotic time series are accurate. This work introduces the chi-square test for goodness of fit and the statistic R-squared in this scenario. These new features facilitate the choice between different predictors and improve the predictive capacity of the LinMapTS package.
Program Title: LinMapTS
Program Files doi: http://dx.doi.org/10.17632/pnhy9zymrp.2
Licensing provisions: GPLv3
Programming language: Maple 17
Journal reference of previous version: P.R.L. Alves, L.G.S. Duarte, L.A.C.P. da Mota, Comput. Phys. Commun. 215 (2017) 265–268
Does the new version supersede the previous version?: Yes
Reasons for the new version: The global fitting captures the underlying dynamics of a time series, so it is convenient to test whether a map properly describes the time evolution of an observable. The introduction of the chi-square test for goodness of fit in the reconstruction scheme addresses this demand [1]. On the other hand, one satisfactory global map may be more accurate than another, and the statistic R-squared is a powerful aid in choosing the best predictor [2].
Summary of revisions: If the command LinGfiTS receives the optional argument Analysis=1, the output now presents both the result of the test and the statistic R-squared for the global fitting. To illustrate the computational implementation of these new features, we revisit some predictions of chaotic time series with the LinMapTS package [3,4]. The commands below suffice to reconstruct the state vectors and to forecast the dynamical variable of order 723 (X_P) when one analyzes the time series, for a Lorenz system, stored in the file ts37.txt. In the next step, this communication presents the commands for the analysis of a real-world time series, a chaotic circuit [5,3,6], together with the outputs of the new features and results for different polynomial predictors. Fig. 1 presents successful global fittings for the Lorenz system (see Fig. 1(a)) and for the chaotic voltage in a four-dimensional reconstructed phase space (see Fig. 1(b)). In both case studies, the most accurate forecasts correspond to the largest R-squared among all polynomial predictors (see Tables 1 and 2). These applications therefore suggest that the new features presented in this communication may improve the forecast capability significantly.
Nature of problem: Time series analysis and improving forecast capability.
Solution method: The method of solution is published in [3].
Additional comments including restrictions and unusual features: Depending on the input data, the chi-square test may not run properly. In this case, one must introduce the optional argument ChiSquare=0 to perform the graphical analysis and compute the statistic R-squared.
Declaration of competing interest: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments: L.G.S. Duarte and L.A.C.P. da Mota wish to thank Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) – Registro n.º 8182/UERJ/2013 and Deliberação n.º 25/2013 – for the Research Grant.
References:
[1] D. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, Chapman & Hall/CRC, Boca Raton, 2011.
[2] N. Draper, Applied Regression Analysis, Wiley, New York, 1998.
[3] P. Alves, L. Duarte, L. da Mota, Improvement in global forecast for chaotic time series, Comput. Phys. Commun. 207 (2016) 325–340. https://doi.org/10.1016/j.cpc.2016.05.011
[4] P. Alves, L. Duarte, L. da Mota, Alternative predictors in chaotic time series, Comput. Phys. Commun. 215 (2017) 265–268. https://doi.org/10.1016/j.cpc.2017.02.013
[5] P. Alves, L. Duarte, L. da Mota, A new characterization of chaos from a time series, Chaos, Solitons and Fractals 104 (2017) 323–326. https://doi.org/10.1016/j.chaos.2017.08.033
[6] P. McSharry, Nonlinear dynamics and chaos workshop (accessed 09.08.2014). URL http://people.maths.ox.ac.uk/mcsharry/lectures/ndc/ndcworkshop.shtml
• Predicting chaotic time series gives accurate results in the reconstruction scheme.
• The ease and simplicity of the Maple environment facilitate time series analysis.
• New features increase the predictive capacity of the LinMapTS package.
[ABSTRACT FROM AUTHOR]
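The two statistics introduced by this version can be illustrated outside the package. The sketch below is not the LinMapTS code (which is Maple); it is a minimal Python illustration, on assumed synthetic data from the logistic map, of the statistic R-squared and the chi-square sum for a global fit that is linear in its single parameter (the assumed observational error sigma is purely illustrative).

```python
# Minimal sketch: fit x_{n+1} = a * x_n * (1 - x_n) to a logistic-map series
# by least squares, then compute R-squared and the chi-square statistic.
r = 3.9                                         # assumed "true" map parameter
xs = [0.4]
for _ in range(200):
    xs.append(r * xs[-1] * (1.0 - xs[-1]))

inputs, targets = xs[:-1], xs[1:]
phi = [x * (1.0 - x) for x in inputs]           # single basis function
a = sum(p * y for p, y in zip(phi, targets)) / sum(p * p for p in phi)

pred = [a * p for p in phi]
resid = [y - yp for y, yp in zip(targets, pred)]
ss_res = sum(e * e for e in resid)
mean_y = sum(targets) / len(targets)
ss_tot = sum((y - mean_y) ** 2 for y in targets)
r_squared = 1.0 - ss_res / ss_tot               # statistic R-squared

sigma = 1e-3                                    # assumed observational error
chi2 = sum((e / sigma) ** 2 for e in resid)     # chi-square statistic
dof = len(targets) - 1                          # one fitted parameter
```

Since the model is exact here, the fit recovers a ≈ 3.9, R-squared ≈ 1 and a negligible reduced chi-square; a poor predictor would show up as a smaller R-squared and a reduced chi-square far from 1.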

Alves, P.R.L., Duarte, L.G.S., and da Mota, L.A.C.P.
Computer Physics Communications. Jun 2017, Vol. 215, p. 265-268. 4p.
 Subjects

TURBULENCE, PROGRAMMING languages, GNU General Public License (Free software license), OPEN source software, and SCALAR field theory
 Abstract

In the scheme of reconstruction, non-polynomial predictors improve the forecast from chaotic time series. Algebraic manipulation in the Maple environment is the basis for obtaining accurate predictors. Beyond the different times of prediction, the optional arguments of the computational routines optimize the running and the analysis of global mappings.
New version program summary
Program Title: LinMapTS
Program Files doi: http://dx.doi.org/10.17632/pnhy9zymrp.1
Licensing provisions: GNU General Public License version 3
Programming language: Maple 17
Journal reference of previous version: Comput. Phys. Comm. 207 (2016) 325
Does the new version supersede the previous version?: Yes
Nature of problem: Time series analysis and improving forecast capability.
Solution method: The method of solution is published in [1].
Restrictions: The routines employ the global variables {a_i, b, X_i}. If more than 2000 vectors are employed in the global mapping, the normality test is not applicable.
Unusual features: The algebraic manipulation of the predictors improves the global forecast.
Reasons for the new version: In the reconstruction scheme [2], the predictor in the global approach has a standard form [3]. From a time series {X(0Δt), X(1Δt), …, X((S−1)Δt)} with S scalar quantities X(t) and a time interval Δt, a state vector in the N-dimensional reconstructed phase space is
(1) |x(t)⟩ ≐ [X((N−1)TΔt), …, X(TΔt), X(0)]ᵀ.
The time delay T is a parameter for the choice of the observables X((N−k)TΔt) (k is an integer) that are available in the time series [4]. The predictors P(|x_r⟩) in the routine LinGfiTS of the package LinMapTS are linear combinations of the adjustment parameters,
(2) P(|x⟩) = Σ_{i=1}^{m} a_i φ_i(|x⟩).
Because of this restricted form, the m-dimensional vector of parameters |a⟩ is a computational solution of a matrix equation.
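The reconstruction of Eq. (1) and the matrix solution for the parameters of Eq. (2) can be sketched outside Maple. The Python sketch below uses an assumed linear recurrence as the "time series" so that the recovered parameters are known in advance; the basis functions are simply the embedded coordinates.

```python
# Sketch (not the Maple package): delay-embed a series into 2-dimensional
# state vectors and solve the least-squares normal equations for the
# parameters a_i of a predictor P = a_1*X_1 + a_2*X_2.
series = [1.0, 0.8]                # assumed series obeying X(n+1) = 0.5 X(n) + 0.3 X(n-1)
for _ in range(30):
    series.append(0.5 * series[-1] + 0.3 * series[-2])

N, T = 2, 1                        # embedding dimension and time delay
vecs = [(series[i], series[i - T]) for i in range(T, len(series) - 1)]
targets = [series[i + 1] for i in range(T, len(series) - 1)]

# Normal equations A a = b for the basis phi_i(|x>) = X_i
A = [[sum(v[i] * v[j] for v in vecs) for j in range(2)] for i in range(2)]
b = [sum(v[i] * y for v, y in zip(vecs, targets)) for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
a1 = (b[0] * A[1][1] - b[1] * A[0][1]) / det   # Cramer's rule, 2x2 system
a2 = (A[0][0] * b[1] - A[1][0] * b[0]) / det
```

Because the predictor is linear in the parameters, the whole fit reduces to this one small matrix solve, which is why the procedure needs only small runtimes.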
So the computational procedure requires small runtimes for the least-squares minimization [1]. The routine generates only polynomial global maps, i.e. the functions φ(|x⟩) can assume forms, in a reconstructed phase space with variables (X_1, X_2, X_3), like φ_1 = X_1², φ_2 = X_1³X_2²X_3 and so on. The predictors do not admit terms such as sin(X_1X_2X_3) or ln(1 + 1/(X_1X_2X_3)). A prediction, denoted by X_1^P, is the result of the application of the global map
(3) X_1^P = P(|x_{P−1}⟩),
where P is the order of the last known observable in the time series. The principal focus of this new version is to extend the permissible functional forms of the global mappings. If non-polynomial terms take part in the predictors, the accuracy of the forecast can be improved. Here, the purpose is to offer the researcher better features for increasing the predictive power obtained from a chaotic time series. Another desired extension refers to the time of prediction. With the integer parameter τ, the future instant is given by t + τΔt. We presented this idea in the new version of the package TimeS [5], but its runtime for generating polynomial maps is greater than that of the procedure LinGfiTS [1]. In this work, we apply 1 → τ in Eq. (3), which is then rewritten as
(4) X_1^P = P(|x_{P−τ}⟩).
Summary of revisions: New optional arguments enable the selection of functional forms with different prediction times in the method of forecasting. The distribution includes instructions for using the LinMapTS package (README.pdf), the computational routines (LinMapTS.txt) and a test file (LinMapTS.mw). In order to optimize the running of the programs LinGfiTS and ConfiTS, the arguments have been rearranged. The current routines take optional arguments, some of which were indispensable in the previous version. The command LinGfiTS now requires only two arguments.
The first is a list of reconstructed vectors, assigned as V, and the second is the order of the last vector present in the global mapping, assigned as final. The input necessary for running the procedure ConfiTS must have, in the following order: the global map, assigned as map; the list V; and the integer final. Below, we describe all arguments that take part in the new version of the package LinMapTS.
For LinGfiTS:
• Required arguments:
– List of reconstructed vectors—assigned as V in this paper.
– The vector that has, as its first component, the last known value of the time series—assigned as final in this paper.
• Optional arguments:
– Degree=. Specifies the degree of the polynomial predictor. The default is 2.
– Func=. Specifies the predictor for the global mapping.
– Level=. Selects the interval of the time series for the global mapping. The default is 5.
– PT=. Specifies the value of the parameter τ. The default is 1.
– Analysis=1. The necessary input for a graphical analysis of the global fitting.
For ConfiTS:
• Required arguments:
– The global map—assigned as map in this paper.
– List of reconstructed vectors—assigned as V in this paper.
– The vector that has, as its first component, the last known value of the time series—assigned as final in this paper.
• Optional arguments:
– Level=. Selects the interval of the time series for the global mapping. The default is 5.
– PT=. Specifies the value of the parameter τ. The default is 1.
– Analysis=1. The necessary input for a graphical analysis of the residuals' distribution and the application of the normality test.
The expected deviation in the forecast is
(5) σ_τ = √( Σ_{j=1}^{M} ( X_{1j} − Σ_{i=1}^{m} a_i φ_i(|x_{j−τ}⟩) )² / (M − 1) ).
The outputs remain unchanged from the original programs [1].
The routine LinGfiTS makes available a global map, whereas the procedure ConfiTS returns the expected deviation σ_τ in the forecast. However, this statistical quantity now incorporates the parameter τ: its formula (5) employs the M reconstructed vectors that take part in the global fitting. As a first example of using the commands, the time parameter selected is τ = 2, so it is necessary to include the optional argument PT=2 in the Maple prompt. Here, the file ts37.txt stores a chaotic time series for a Lorenz system. The procedure VecTS, of the TimeS package, reconstructs the state vectors; our code still attaches all routines of this package in the present version [1,5]. This paper does not show the outputs of the computational procedures. We invite the reader to run the Maple worksheet in the additional file LinMapTS.mw. If the selection Analysis=1 is included in the commands, the routines perform analytical tasks. A graphic with the calculated and actual values of the observables provides a monitoring of the global fitting performed by the program LinGfiTS. In the command ConfiTS, the same argument displays an analytical histogram together with the results of the Shapiro–Wilk test for the residuals ϵ_{j−τ} in the global mapping. When the normality test gives a positive result, the confidence level L_τ for a prediction is well established [1]. For a prediction error ϵ́_{j+τ}, this magnitude is now given by
(6) L_τ = 1 − (2/√(2π)) ∫_{ϵ́_{j+τ}/σ_τ}^{∞} exp{ −ϵ_{j−τ}² / (2σ_τ²) } d(ϵ_{j−τ}/σ_τ).
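Under the normality hypothesis validated by the Shapiro–Wilk test, the Gaussian integral in Eq. (6) reduces to an error function, which makes the confidence level easy to evaluate. A small Python sketch (not part of the package) of that evaluation:

```python
# Sketch of Eq. (6): the confidence level for a prediction error epsilon,
# given the expected deviation sigma_tau. The one-sided Gaussian tail
# integral collapses to erf(|epsilon| / (sigma_tau * sqrt(2))).
import math

def confidence_level(epsilon, sigma_tau):
    """L_tau = 1 - 2/sqrt(2*pi) * integral_{|eps|/sigma}^inf exp(-u^2/2) du."""
    return math.erf(abs(epsilon) / (sigma_tau * math.sqrt(2.0)))

# An error equal to sigma_tau gives the familiar one-sigma level (~68%);
# the 3*sigma_tau band corresponds to ~99.7%.
one_sigma = confidence_level(1.0, 1.0)
three_sigma = confidence_level(3.0, 1.0)
```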
(7) P_pol(|x⟩) = a_1X_1 + a_2X_2 + a_3X_3 + a_4X_1² + a_5X_1X_2 + a_6X_1X_3 + a_7X_2² + a_8X_2X_3 + a_9X_3²
(8) P_pot(|x⟩) = a_1(X_1/|X_1|)|X_1|^0.9 + a_2(X_1/|X_1|)|X_1|^1.1 + a_3(X_2/|X_2|)|X_2|^0.9 + a_4(X_2/|X_2|)|X_2|^1.1 + a_5(X_3/|X_3|)|X_3|^0.9 + a_6(X_3/|X_3|)|X_3|^1.1 + a_7(X_1X_2X_3/|X_1X_2X_3|)|X_1X_2X_3|^0.9 + a_8(X_1X_2X_3/|X_1X_2X_3|)|X_1X_2X_3|^1.1
(9) P_log(|x_r⟩) = a_1X_1 + a_2X_2 + a_3X_3 + a_4 ln(1 + (1/10)cos(X_1)) + a_5 ln(1 + (1/10)cos(X_2)) + a_6 ln(1 + (1/10)cos(X_3)) + a_7 ln(1 + (1/10)sin(X_1)) + a_8 ln(1 + (1/10)sin(X_2)) + a_9 ln(1 + (1/10)sin(X_3)).
A global mapping for the entire time series presented in previous paragraphs was explored in the first paper about the package LinMapTS [1]. We revisit this application with the inclusion of the optional argument Analysis=1. The predictor
(10) P(|x_r⟩) = a_1X_1 + a_2X_2 + a_3X_3 + ⋯ + a_55X_3⁵
is automatically generated by LinMapTS with the optional argument Degree=5. The choice Level=39 ensures that the interval for the global mapping covers the whole series. Fig. 1 shows the graphical analysis that the routines of the LinMapTS package make available from these commands. The plotting in Fig. 1(a) confirms that the global fitting is of high quality, in accordance with the accurate forecast presented in the previous version of the package [1]. The histogram, conjugated with the standard normal curve in Fig. 1(b), is compatible with the positive result of the Shapiro–Wilk test for the residuals' distribution. With the non-polynomial predictors P_pot (8), assigned as F[2], and P_log (9), assigned as F[3], we obtained forecasts more accurate than with the polynomial form P_pol (7), assigned as F[1]. The commands employed the optional argument Func.
Table 1 presents results of the two routines for the same time series studied in the previous example. The observable of interest in the prediction is V[723][1]. Although the alternative global mappings required greater runtime than the polynomial function, the time measures show that the computational cost remains small. From the errors |ϵ́_{j+τ}|, the most accurate predictor is P_log, followed by P_pot; in this case study, the polynomial function P_pol gives the largest error. The alternative functional forms therefore increased the accuracy of the forecasting. Another point in favor of the method is the comparison between the actual errors |ϵ́_{j+τ}| and 3σ_τ. For the alternative predictors, the inequality |ϵ́_{j+τ}| < 3σ_τ corresponds to the positive result in the normality test. However, the polynomial mapping did not permit establishing a credible confidence level L_τ (6) in the prediction. From this application and many others, one concludes that this update represents a substantial improvement in the predictive capacity of the LinMapTS package.
Acknowledgments: L.G.S. Duarte and L.A.C.P. da Mota wish to thank Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ) for the Research Grant.
[1] P. Alves, L. Duarte, L. da Mota, Comput. Phys. Commun. 207 (2016) 325–340.
[2] F. Takens, Detecting strange attractors in turbulence, in: D. Rand, L.S. Young (Eds.), Dynamical Systems and Turbulence, Warwick 1980, Vol. 898 of Lecture Notes in Mathematics, Springer, Berlin, 1981, pp. 366–381.
[3] M. Casdagli, Physica D: Nonlinear Phenomena 35 (3) (1989) 335–356.
[4] D. Ruelle, Chaotic Evolution and Strange Attractors: The Statistical Analysis of Time Series for Deterministic Nonlinear Systems, Cambridge University Press, Cambridge, 1989.
[5] P. Alves, L. Duarte, L. da Mota, Computer Physics Communications 207 (2016) 539–541.
[ABSTRACT FROM AUTHOR]
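The selection logic described above (prefer the largest R-squared, and trust the confidence level only when the actual error lies inside the 3σ_τ band) can be sketched with purely hypothetical numbers; the values below are illustrative, not the entries of Table 1.

```python
# Hypothetical figures, only to illustrate the selection criterion:
# name -> (r_squared, |error|, sigma_tau)
predictors = {
    "P_pol": (0.9990, 0.80, 0.20),
    "P_pot": (0.9997, 0.15, 0.10),
    "P_log": (0.9999, 0.05, 0.10),
}
best = max(predictors, key=lambda k: predictors[k][0])
r2, err, sigma = predictors[best]
credible = err < 3.0 * sigma     # normality-backed confidence band holds
```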

Sarkadi, L.
Computer Physics Communications. Mar 2017, Vol. 212, p. 280-282. 3p.
 Subjects

BOUND-free transitions, MOLECULAR interactions, BESSEL functions, RADIAL wavefunctions, and PROGRAMMING languages
 Abstract

The program MTRXCOUL [1] calculates the matrix elements of the Coulomb interaction between a charged particle and an atomic electron, ∫ ψ_f*(r) |R − r|⁻¹ ψ_i(r) dr. Bound-free transitions are considered, and non-relativistic hydrogenic wave functions are used. In this revised version a bug discovered in the F3Y CPC Program Library (PL) subprogram [2] is fixed. Furthermore, the COULCC CPC PL subprogram [3], applied for the calculation of the radial wave functions of the free states and of the Bessel functions, is replaced by the CPC PL subprogram DCOUL [4].
New version program summary
Program Title: MTRXCOUL
Program Files doi: http://dx.doi.org/10.17632/xyg9zrmzz2.1
Licensing provisions: GNU GPL v3
Programming language: Fortran 77
Journal reference of previous version: Comput. Phys. Commun. 133 (2000) 119.
Does the new version supersede the previous version?: Yes
Reasons for the new version: 1. In some applications MTRXCOUL led to unexpected results that were traced back to the erroneous execution of the subprogram F3Y [2]. For example, in some cases F3Y yielded completely different values for the inputs (l, m_1, l, m_2, l′, m′) and (l, m_2, l, m_1, l′, m′), while, for symmetry reasons, one expects equal results. In the new version this error in F3Y was corrected. 2. In MTRXCOUL the COULCC subprogram [3] is applied for the calculation of the radial wave functions of the free states R_{E,l}(r) and the Bessel functions J_n(x). Since the publication of MTRXCOUL, a relativistic version of the program, MTRDCOUL, has also been developed and published [5]. In MTRDCOUL, R_{E,l}(r) and J_n(x) are calculated by the subprogram DCOUL written by Salvat et al. [4]. Since the latter program is also suitable for calculations of non-relativistic wave functions, to ensure consistency between MTRXCOUL and MTRDCOUL, COULCC [3] was replaced by DCOUL in the revised program.
Furthermore, in some applications DCOUL turned out to be more efficient than COULCC.
Summary of revisions: 1. In line AAQQ0036 of the original F3Y program [2] the incorrect assignment A2 = L1 − M1 − N2 was replaced by A2 = L1 − M1 − N1. The corrected F3Y was tested by comparing its results with those obtained by a program written for the integral of the product of three spherical harmonics using the 369j program [6]. Agreement within 10⁻¹⁴ was found between the two calculations for all possible arguments of the function for values of l_i up to 10. 2. In the RFINAL function of the revised MTRXCOUL the regular Coulomb function F_l(η, x) is calculated using the code DCOUL [4]. For E ≠ 0, F_l(η, x) is obtained using the SCOUL subroutine. For E = 0, R_{E,l}(r) is expressed in terms of the Bessel function J_n(x); the latter is obtained by calling the FCOUL subroutine of DCOUL in a separate program unit, BESSJ(N,X).
Nature of problem: The theoretical description of the excitation and ionization of atoms by charged-particle impact often requires knowledge of the matrix elements of the Coulomb interaction. Considering that the program can easily be extended to the calculation of matrix elements between wave functions other than hydrogenic ones, it may find broad application, including the treatment of electron–electron correlation problems.
Solution method: The algorithm is based on the multipole series expansion of the Coulomb potential.
Additional comments including restrictions and unusual features: The matrix elements can be calculated with the following restrictions. The initial bound states are limited to 1s, 2s, 2p, 3s, 3p, 3d. The quantum number l in the final state has a maximum value of 10.
Acknowledgments: This work was supported by the National Scientific Research Foundation (OTKA, Grant No. K109440).
[1] L. Sarkadi, Comput. Phys. Commun. 133 (2000) 119.
[2] A. Liberato de Brito, Comput. Phys. Commun. 25 (1982) 81.
[3] I.J. Thompson and A.R. Barnett, Comput. Phys. Commun. 36 (1985) 363.
[4] F. Salvat, J.M. Fernández-Varea and W. Williamson Jr., Comput. Phys. Commun. 90 (1995) 151.
[5] L. Lugosi and L. Sarkadi, Comput. Phys. Commun. 141 (2001) 73.
[6] L. Wei, Comput. Phys. Commun. 120 (1999) 222; Erratum: 182 (2011) 1199.
Appendix: TEST RUN OUTPUT
[ABSTRACT FROM AUTHOR]
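The solution method quoted above, the multipole series expansion of the Coulomb potential, is easy to verify numerically. The Python sketch below (not the Fortran program) truncates 1/|R − r| = Σ_l (r_<^l / r_>^{l+1}) P_l(cos γ) and compares it with the direct value; the radii and angle are arbitrary test values.

```python
# Sketch: multipole expansion of the Coulomb potential 1/|R - r|.
# P_l is built with the Bonnet recursion (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}.
import math

def legendre(l, x):
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def coulomb_multipole(r, R, cos_gamma, l_max=40):
    """Truncated multipole series, r_< and r_> being min and max of (r, R)."""
    lo, hi = min(r, R), max(r, R)
    return sum(lo ** l / hi ** (l + 1) * legendre(l, cos_gamma)
               for l in range(l_max + 1))

r, R, cg = 0.5, 2.0, 0.3               # arbitrary test radii and angle cosine
direct = 1.0 / math.sqrt(r * r + R * R - 2.0 * r * R * cg)
series = coulomb_multipole(r, R, cg)
```

With r/R = 0.25 the terms decay geometrically, so 40 multipoles already reproduce the direct value to machine-level accuracy.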

Muñoz-Santiburcio, Daniel and Hernández-Laguna, Alfonso
Computer Physics Communications. Aug 2017, Vol. 217, p. 212-214. 3p.
 Subjects

CODING theory, GROUP velocity dispersion, ACOUSTIC wave effects, PARAMETER estimation, PROGRAMMING languages, PHASE transformations (Physics), and SURFACES (Physics)
 Abstract

We present an improved version of the code AWESoMe, capable of computing phase and group velocities, power flow angles and enhancement factors of acoustic waves in homogeneous solids. In this version, some algorithms are improved and the code provides a better estimate of the enhancement factor than the previous version. In addition, we include a quadruple-precision version of the code which, even though it uses the same numerical approach as the double-precision version, is able to calculate the exact values of the enhancement factor. The standard, double-precision version of the code has been interfaced and merged with the development version of CRYSTAL and will be available as part of its next stable release. Finally, we have improved the scripts for visualizing the results, which are now compatible with Gnuplot 5.X.X, including new scripts for the visualization of the normal and ray surfaces.
New version program summary
Program Title: AWESoMe 1.1
Program Files doi: http://dx.doi.org/10.17632/fr58gfsc9n.1
Licensing provisions: GPLv3
Programming language: Fortran 90
Journal reference of previous version: Computer Physics Communications 192 (2015) 272–277
Does the new version supersede the previous version?: Yes
Reasons for the new version: Improved accuracy, improved visualization scripts.
Nature of problem: Calculation of acoustic-wave phase and group velocities, power flow angles and enhancement factors in homogeneous solids.
Solution method: Solving the Christoffel equation by diagonalization; computing group velocities and enhancement factors by vector operations.
Additional comments: The DSYEVJ3 [1] subroutine is included in the AWESoMe code file.
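The Christoffel equation that the code diagonalizes can be sketched for the simplest case. The Python sketch below (not the Fortran code) builds the Christoffel matrix Γ_ik = C_ijkl n_j n_l for an isotropic solid, where its eigenvalues ρv² are known in closed form, so no eigensolver is needed; the Lamé parameters and density are assumed, illustrative values.

```python
# Sketch: Christoffel matrix of an isotropic solid. For isotropic C_ijkl,
# Gamma_ik = (lam + mu) n_i n_k + mu delta_ik, with eigenvalues
# lam + 2*mu (longitudinal) and mu (transverse, twice).
import math

lam, mu, rho = 57.0e9, 26.0e9, 2700.0          # Pa, Pa, kg/m^3 (assumed)
n = [1 / math.sqrt(3)] * 3                     # phase propagation direction

def christoffel(n):
    d = lambda i, k: 1.0 if i == k else 0.0
    return [[(lam + mu) * n[i] * n[k] + mu * d(i, k) for k in range(3)]
            for i in range(3)]

G = christoffel(n)
Gn = [sum(G[i][k] * n[k] for k in range(3)) for i in range(3)]
v_p = math.sqrt(Gn[0] / (rho * n[0]))          # longitudinal: G n = (lam+2mu) n
t = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0] # any direction orthogonal to n
Gt = [sum(G[i][k] * t[k] for k in range(3)) for i in range(3)]
v_s = math.sqrt(Gt[0] / (rho * t[0]))          # transverse: G t = mu t
```

For an anisotropic tensor the eigenvalues are no longer available analytically, which is where a 3 × 3 diagonalizer such as DSYEVJ3 comes in.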
Summary of revisions: New approach for sampling the unit sphere around the propagation direction l: In the first version of the code, the direction of the group velocities was determined by the vector product of two vectors a and b, constructed with four points of the slowness surface evaluated at (θ_i ± dθ, φ_i) and (θ_i, φ_i ± dφ), where (θ_i, φ_i) is the point of the unit sphere defined by the (phase) propagation direction l. This approach had the drawbacks of being ill-defined at the poles and of potentially leading to different precisions depending on φ (even though we always observed a perfect estimation of the group velocities in the test cases). In the present revision we introduce a more consistent approach for estimating the normal to the slowness surface. Now, the phase velocity is first evaluated at the point A(θ_i, φ_i + dφ), and then at three points B, C and D obtained by rotating A by 90, 180 and 270 degrees around the propagation direction l (Fig. 1(a)). The calculation of the normal to the slowness surface n = a × b (Fig. 1(b)) is straightforward, as in the previous version of the code, but now we ensure the same accuracy for all points on the unit sphere.
New approach for computing the enhancement factor: The calculation of the enhancement factor presented a similar issue. The 3 × 3 grid used for the estimation of the solid angles ΔΩ_k and ΔΩ_g employed the same points as those used for sampling the propagation directions l on the unit sphere (cf. Fig. 3(b) in [2]). Thus, not only was the accuracy remarkably different depending on the azimuth φ, but the grid was also quite coarse, and consequently the estimate of the enhancement factor was somewhat poor. In this new version, the points used for estimating ΔΩ_k and ΔΩ_g have an umbrella-like arrangement in which the first point is (θ_i, φ_i + Δφ) and the next 7 points are obtained by rotating the former in steps of 45 degrees around l (Fig. 1(c)). As in the case of the group velocities, the accuracy is now independent of φ. Another problem with the numerical estimation of the enhancement factor is that a good estimate requires small values of Δφ, but this in turn demands very high precision in computing the vectors u and w (since the points A–H can be extremely close to P). Consequently, in the standard version of the code we have optimized the Δφ parameter to minimize the error in the enhancement factor, setting it to Δφ = 0.02°. In addition, we now provide another version of the code that employs quadruple precision instead of the usual double precision of the standard version. In this higher-precision version, Δφ is set to 10⁻⁵ degrees, which is enough to obtain the exact values (i.e. indistinguishable from those obtained with analytical methods) of the enhancement factor. We note that AWESoMe has produced the exact values of all the other parameters (phase and group velocities, power flow angles and polarization vectors) ever since its first version [2]; only the exact determination of the enhancement factor remained to be settled.
Improved visualization scripts: As a minor improvement, we have also updated the visualization scripts so that they are compatible with Gnuplot 5.X.X versions, and in addition we include two new scripts for plotting the normal surfaces and the ray surfaces (Fig. 2).
AWESoMe merged into CRYSTAL: Finally, we are happy to announce that the present AWESoMe version 1.1 (in its double-precision implementation) has been successfully merged into the development version of the CRYSTAL code [3]. Therefore, starting with its next stable release, after CRYSTAL performs an automated calculation of the elastic tensor of a crystalline system (a feature already present in CRYSTAL14), AWESoMe can be run internally within CRYSTAL so that the user directly gets the same output provided by AWESoMe in addition to the usual CRYSTAL output.
Again, we note that this will provide the exact values for phase and group velocities, polarization vectors and power flow angles, together with a reasonable estimate of the enhancement factor (for exact values of the latter, simply run the quadruple-precision version of the present release, AWESoMe 1.1).
Acknowledgments: We are very grateful to Alessandro Erba and Roberto Dovesi for merging AWESoMe into CRYSTAL. The development of AWESoMe was funded through the project TRA2009_0205 of the Spanish Ministerio de Ciencia e Innovación. We thank the Centro de Servicios de Informática y Redes de Comunicaciones (CSIRC), University of Granada, for providing the computing time.
[1] J. Kopp, Efficient numerical diagonalization of hermitian 3 × 3 matrices, Int. J. Mod. Phys. C 19 (2008) 523–548.
[2] D. Muñoz-Santiburcio, A. Hernández-Laguna, J.I. Soto, AWESoMe: A code for the calculation of phase and group velocities of acoustic waves in homogeneous solids, Comput. Phys. Commun. 192 (2015) 272–277.
[3] R. Dovesi, R. Orlando, A. Erba, C.M. Zicovich-Wilson, B. Civalleri, S. Casassa, L. Maschio, M. Ferrabone, M. De La Pierre, P. D'Arco, Y. Noël, M. Causà, M. Rérat, B. Kirtman, CRYSTAL14: A program for the ab initio investigation of crystalline solids, Int. J. Quantum Chem. 114 (2014) 1287–1317.
[ABSTRACT FROM AUTHOR]
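The rotation-based sampling described in the summary of revisions (point A rotated in 90-degree steps around the propagation direction l) can be sketched with Rodrigues' rotation formula. This Python sketch is not the AWESoMe implementation; the tilt angle stands in for the dφ offset from l.

```python
# Sketch: rotate a sample point around the propagation direction l in
# 90-degree steps (Rodrigues' formula), so every direction on the unit
# sphere is sampled with the same geometry.
import math

def rotate(v, axis, theta):
    """Rodrigues rotation of vector v around a unit axis by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(a * b for a, b in zip(axis, v))
    cross = [axis[1] * v[2] - axis[2] * v[1],
             axis[2] * v[0] - axis[0] * v[2],
             axis[0] * v[1] - axis[1] * v[0]]
    return [v[i] * c + cross[i] * s + axis[i] * dot * (1 - c) for i in range(3)]

l = [0.0, 0.0, 1.0]                          # propagation direction (unit)
A = [math.sin(0.02), 0.0, math.cos(0.02)]    # point tilted by d_phi from l
samples = [rotate(A, l, math.radians(k * 90)) for k in range(4)]  # A, B, C, D
```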
5. APINetworks Java. A Java approach to the efficient treatment of large-scale complex networks. [2016]

Muñoz-Caro, Camelia, Niño, Alfonso, Reyes, Sebastián, and Castillo, Miriam
Computer Physics Communications. Oct 2016, Vol. 207, p. 549-552. 4p.
 Subjects

APPLICATION program interfaces, COMPUTATIONAL complexity, JAVA (Computer program language), CODING theory, C++, and LARGE scale systems
 Abstract

We present a new version of the core structural package of our Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments. The new version is written in Java and presents several advantages over the previous C++ version: the portability of the Java code, the ease of implementing object-oriented designs, and the simplicity of memory management. In addition, some additional data structures are introduced for storing the sets of nodes and edges. Also, by resorting to the different garbage collectors currently available in the JVM, the Java version is much more efficient than the C++ one with respect to memory management. In particular, the G1 collector is the most efficient one because of the parallel execution of G1 and the Java application. Using G1, APINetworks Java outperforms the C++ version and the well-known NetworkX and JGraphT packages in the building and BFS traversal of linear and complete networks. The better memory management of the present version allows for the modeling of much larger networks.
New version program summary
Program title: APINetworks Java 1.0
Program Files doi: http://dx.doi.org/10.17632/3pzd5v4chp.1
Licensing provisions: Apache License 2.0
Programming language: Java
Journal reference of previous version: Comput. Phys. Commun. 196 (2015) 446
Does the new version supersede the previous version? Yes
Nature of problem: Owing to the availability of large data collections, the computational modeling and analysis of large-scale complex networks are becoming a topic of great interest. However, no single computational solution exists to model and analyze large networks efficiently in different computational environments, especially when the networks are heterogeneous and dynamic.
Solution method: To tackle the above problem, we have developed an Application Programming Interface, APINetworks, for the treatment of complex networks in arbitrary computational environments.
By resorting to object orientation and, in particular, to inheritance and polymorphism, APINetworks makes it possible to describe heterogeneous and dynamic networks in arbitrary computational environments. Originally, a C++ version of the core structural package was developed.
Reasons for the new version: A Java version is very attractive compared with the C++ one because of the ease of implementing object-oriented designs, the portability of the code, the simplicity of memory management, and the availability of tools for parallel and distributed computing. In addition, the use of Java's automatic garbage collection permits efficient memory management, which, for a given amount of RAM, allows larger networks to be built and analyzed.
Summary of revisions: The APINetworks Java version introduces some specializations of the general design presented in [1]. In particular, we make use of bounded generics [2] to allow the use of an integer as node or edge key in the network-modeling classes. In this form, we can make explicit use of key-handling methods in these classes without losing the generality provided by generics. To such an end, we introduce an interface Indexable, defining a getKey() method that returns the integer used for identification purposes. This interface is implemented by the Node and Edge generic interfaces introduced in [1], and by all their descendant classes. By bounding the generic type of a given class to Indexable, we allow the use of the getKey() method through any reference of the generic type. This capability is especially useful for node or edge processing within data structures. When working with the nodes of a network, each node needs to store appropriate references to its incident edges. In the previous C++ APINetworks version [1] we made use of a linked list as defined in the C++ Standard Template Library. In Java, we have a similar option: the LinkedList class implemented in the standard Java API. However, LinkedList is a doubly linked list.
Thus, two references are used for each element stored in the list: one referring to the previous element and another to the next. To reduce the amount of memory used and to increase the efficiency of graph-related algorithms, which usually only need to traverse the list of edges of each node, we have developed a singly linked list, in which each node of the list stores only a reference to the next node and a data element. The singly linked list, APINetworksList, handles any generic element implementing the Indexable interface. To allow traversal of the list, we have developed an iterator. Therefore, the APINetworksList class implements the Iterable interface of the Java standard API and contains an inner class, APINetworksListIterator, which implements the Iterator interface of the Java standard API. In short, the APINetworksList class returns an iterator that can be handled like the iterator of any other data structure of the Java API. In the previous C++ APINetworks implementation, the set of nodes and edges of a network can be represented through a generic growable array, among other possibilities. However, in Java, the data models used in arrays and generics are not fully compatible, since arrays are covariant and generics are invariant [2]. Thus, it is not possible to allocate generic arrays. The solution is to allocate arrays of class Object and cast them to the generic type. Using our integer key as index into the array, access to specific elements is done in constant time. In addition, in the present Java APINetworks version, we have introduced the use of hash tables [3] with the integer key of nodes or edges as hash code [3]. Again, access to specific elements is done in constant time. Another key point for the present APINetworks Java version is the efficiency of the automatic memory management. Thus, we test the behavior of the different GCs implemented in the JVM, using the demanding case of building a complete, fully connected, network.
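The singly linked list with its inner iterator described above can be sketched as follows. This is an illustrative fragment, not the actual APINetworksList source: the real class constrains its elements to Indexable, while this sketch keeps a plain type parameter.

```java
import java.util.Iterator;

// A minimal singly linked list: one 'next' reference per element instead of the
// two kept by java.util.LinkedList, plus an inner iterator so the list can be
// used in for-each loops like any Java collection.
class SinglyLinkedList<T> implements Iterable<T> {
    private static final class Cell<T> { // one data element, one forward reference
        final T data; final Cell<T> next;
        Cell(T data, Cell<T> next) { this.data = data; this.next = next; }
    }
    private Cell<T> head;

    void addFirst(T item) { head = new Cell<>(item, head); } // O(1) insertion

    public Iterator<T> iterator() {
        return new Iterator<T>() { // walks the chain of cells from the head
            private Cell<T> cursor = head;
            public boolean hasNext() { return cursor != null; }
            public T next() { T d = cursor.data; cursor = cursor.next; return d; }
        };
    }
}
```

The single forward reference is what halves the per-element link overhead relative to a doubly linked list when, as in edge-list traversal, only one direction is ever needed.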
Here, each node is connected to every other node. Thus, for n nodes, we have m = n(n − 1)/2 edges, and the relationship between nodes and edges is quadratic, m = O(n²). Different data structures are available in APINetworks Java for representing the sets of nodes and edges in networks. Here, we consider the array-based one, since the simplicity of array memory access makes it the most efficient data structure. For the tests, we consider the Parallel GC, which is the standard one included in the current Java 1.8 distribution, as well as the two concurrent GCs available: the Concurrent Mark Sweep (CMS) collector and the Garbage First (G1) collector. For each of them, we build complete networks ranging from 10³ to 20·10³ nodes in increments of 10³. The results, obtained on an Octa-Core Intel® Xeon® E5-2630 v3 (2.4 GHz) with 48 GB of heap memory for the Java API, show that the G1 garbage collector gives the best performance, followed by CMS and, last, by the Parallel GC. Our data show that the relative difference between the garbage collectors increases with network size. In particular, for the largest network (20·10³ nodes) the G1 and CMS collectors use only 47% and 64% of the Parallel GC time, respectively. These values correspond to a speedup (defined as the quotient of the Parallel GC time to the other collector's time) of 2.1 and 1.6 for G1 and CMS, respectively. The performance of APINetworks Java is tested against the previous C++ version and two popular tools in the field: NetworkX [4], in Python, and JGraphT [5], in Java. As tests, we use two different network operations. The first is a basic one: the construction of the network in the linear and complete cases. The complete case has already been introduced. In the linear one, each node is linked only to the previous one. Thus, for n nodes, we have m = n − 1 edges, and the relationship between nodes and edges is linear, m = O(n).
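The key-indexed, array-based storage discussed earlier can be sketched as below. Since Java forbids allocating a generic array directly, an Object[] is allocated and cast back to the generic type, and the element's integer key doubles as its array index, giving constant-time access. The names Indexable and getKey() follow the text; ArrayStore is an illustrative name, not an APINetworks class.

```java
// Integer-keyed elements, as described in the text.
interface Indexable {
    int getKey(); // integer key used for identification purposes
}

// Array-based set of keyed elements. Arrays are covariant and generics are
// invariant, so 'new T[capacity]' is illegal; we allocate Object[] instead
// and cast on retrieval.
class ArrayStore<T extends Indexable> {
    private final Object[] slots;

    ArrayStore(int capacity) { slots = new Object[capacity]; }

    void put(T item) { slots[item.getKey()] = item; } // key doubles as index

    @SuppressWarnings("unchecked")
    T get(int key) { return (T) slots[key]; } // O(1) access by key
}
```

A hash-table-backed variant would use the same getKey() value as hash code, again giving constant-time access at the price of somewhat higher per-element overhead.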
The second test uses Breadth First Search (BFS) traversal of a network, which for n nodes and m edges exhibits O(n + m) time complexity [3,6]. In all cases, we have used the G1 garbage collector for the Java APIs: APINetworks Java and JGraphT. In addition, we have selected array-based data structures for APINetworks in its Java and C++ versions. NetworkX and JGraphT use hash tables. With JGraphT, we have used the package's built-in linear and complete graph generators. In all cases, we build networks of increasing size until the system memory is exhausted. For the linear case, we build different networks starting with 10⁶ nodes and using an increment of 10⁶ nodes. The results are collected in Fig. 1, case (a). We observe a linear dependence of the running time on the number of nodes, a consequence of the linear relationship between the number of edges and nodes (m = n − 1). In all cases, the variation fits well a linear function of the type Time = a·n. The worst case is NetworkX, with a coefficient of determination r² = 0.983. Fig. 1, case (a), also shows that APINetworks Java is the most time-efficient package, followed by APINetworks C++, JGraphT, and NetworkX. Moreover, APINetworks Java builds networks with as many as 200 million nodes (in 67 seconds) versus the 80 million of APINetworks C++, the 100 million of JGraphT, and the 50 million of NetworkX. For the largest networks built with APINetworks C++, JGraphT, and NetworkX, APINetworks Java is 1.8, 4.0, and 19.5 times faster, respectively. The large difference with NetworkX can be attributed to the interpreted nature of Python. For the complete case, we build networks starting with 10³ nodes, incrementing the size in steps of 10³ nodes. The results are shown in Fig. 1, case (b). Here, we observe a nonlinear variation of the running time with the number of nodes due to the quadratic relationship between the number of edges and nodes (m = n(n − 1)/2).
The variation now fits well a quadratic function of the type Time = a·n². The worst case is now JGraphT, which exhibits a coefficient of determination r² = 0.923. As in the linear case, APINetworks Java is again the most efficient tool, followed by APINetworks C++, NetworkX, and JGraphT. APINetworks Java processes, in 105 seconds, networks with up to 27 thousand nodes (350 million nodes + edges). This value can be compared to the 17 thousand nodes of APINetworks C++ and the 13 thousand nodes of NetworkX. For JGraphT, we have considered only data up to 4 thousand nodes, Fig. 1, case (b), since the running time becomes too large. For the largest APINetworks C++, NetworkX, and JGraphT networks considered, APINetworks Java is 7.0, 10.4, and 320.0 times faster, respectively. Fig. 1: Time (in seconds) used to build linear networks, case (a), and complete networks, case (b), as a function of the number of nodes, N, and the network platform used. Squares represent APINetworks Java. Diamonds, triangles, and circles correspond to APINetworks C++, JGraphT, and NetworkX, respectively. With respect to the BFS traversal in the linear case, Fig. 2, case (a), collects the results obtained in the comparative study. First, we observe a linear variation of the running time with the number of nodes. This is a consequence of the linear dependence between the number of nodes, n, and the number of edges, m = n − 1, and the O(n + m) asymptotic complexity of the BFS procedure. Clearly, in the linear case, the asymptotic complexity [6] of BFS is (1) O(n + m) = O(n + n − 1) = O(n). Despite the oscillations observed in Fig. 2, case (a), the four curves fit extremely well a linear function Time = a·n. In fact, the worst fit is found for the APINetworks Java case, with a coefficient of determination r² = 0.942. With respect to the relative performance, Fig. 2, case (a), shows that APINetworks Java is again the most efficient tool, followed by APINetworks C++, JGraphT, and NetworkX. APINetworks Java needs 69 seconds to traverse the largest, 200-million-node network. In relative terms, APINetworks Java is 1.2, 2.2, and 17.8 times faster than JGraphT, APINetworks C++, and NetworkX, respectively, for the largest network used by each package. Fig. 2: Time (in seconds) used to perform a BFS traversal in linear networks, case (a), and complete networks, case (b), as a function of the number of nodes, N, and the network platform used. Squares represent APINetworks Java. Diamonds, triangles, and circles correspond to APINetworks C++, JGraphT, and NetworkX, respectively. Finally, the results for the BFS traversal of complete networks are collected in Fig. 2, case (b). Now, we observe a nonlinear variation of the running time with the number of nodes. This is due to the quadratic dependence between the number of nodes, n, and the number of edges, m = n(n − 1)/2, and the O(n + m) asymptotic complexity of the BFS procedure. The asymptotic complexity [6] of BFS in the current case is (2) O(n + m) = O(n + n(n − 1)/2) = O(n²). In all cases, the results fit well a quadratic function Time = a·n². The worst result is obtained for NetworkX, with a coefficient of determination r² = 0.901. Now we observe that the efficiency, in decreasing order, of the different packages is: JGraphT, APINetworks Java, and, with almost identical trends, APINetworks C++ and NetworkX. However, the running times obtained for JGraphT are consistently too small. It seems that the built-in complete network is identified as such by the JGraphT BFS routine, and only exploration of the nodes adjacent to the first one is performed. In a complete network, this implies that all the nodes are already visited. Therefore, the JGraphT results cannot be compared to those of the other packages.
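The traversal being timed in these tests is ordinary breadth-first search over an adjacency-list representation; a generic sketch (not the APINetworks implementation) follows. Each node enters the queue once and each edge is inspected a bounded number of times, which is where the O(n + m) bound comes from.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Breadth-first search over an undirected graph given as adjacency lists.
// Every node is enqueued at most once and every edge inspected at most twice,
// giving the O(n + m) running time quoted in the text.
class Bfs {
    static int traverse(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Queue<Integer> queue = new ArrayDeque<>();
        visited[start] = true;
        queue.add(start);
        int count = 0;
        while (!queue.isEmpty()) {
            int u = queue.remove();
            count++; // node u is now fully processed
            for (int v : adj.get(u))
                if (!visited[v]) { visited[v] = true; queue.add(v); }
        }
        return count; // number of nodes reachable from 'start'
    }
}
```

In a complete network this degenerates as described in the text: after the first node is processed, every other node is already marked visited, so a BFS that stops early there does almost no work.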
On the other hand, APINetworks Java needs only 267 seconds to traverse the 27-thousand-node complete network. For the largest networks considered in each case, APINetworks Java is 0.95 and 1.2 times faster than NetworkX and APINetworks C++, respectively. Only in the last two NetworkX cases is APINetworks Java slightly slower (by 6% in the worst case). Acknowledgments: The authors wish to thank the Consejería de Educación y Ciencia de la Junta de Comunidades de Castilla-La Mancha [grant # PEII2014020A]. The economic support of the Universidad de Castilla-La Mancha is also acknowledged. References: [1] A. Niño, C. Muñoz-Caro, S. Reyes, Computer Physics Communications 196 (2015) 446–454. [2] J. Bloch, Effective Java, 2nd edition, Addison-Wesley, 2008. [3] M. T. Goodrich, R. Tamassia, M. H. Goldwasser, Data Structures and Algorithms in Java, 6th edition, International Student Version, Wiley, 2014. [4] NetworkX: http://networkx.github.io/; last accessed June 2016. [5] JGraphT: Java graph library: http://jgrapht.org/; last accessed June 2016. [6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 3rd edition, The MIT Press, 2009. [ABSTRACT FROM AUTHOR]

Kim, Jong Soo, Schmeier, Daniel, Tattersall, Jamie, and Rolbiecki, Krzysztof
Computer Physics Communications. Nov 2015, Vol. 196, p535–562. 28p.
 Subjects

LARGE Hadron Collider, STANDARD model (Nuclear physics), SIMULATION methods & models, RAPID prototyping, and PROGRAMMING languages
 Abstract

CheckMATE is a framework that allows the user to conveniently test simulated BSM physics events against current LHC data in order to derive exclusion limits. For this purpose, the data runs through a detector simulation and is then processed by a user-chosen selection of experimental analyses. These analyses are all defined by signal regions that can be compared to the experimental data with a multitude of statistical tools. Due to the large and continuously growing number of experimental analyses available, users may quickly find themselves in the situation that the study they are particularly interested in has not (yet) been implemented officially into the CheckMATE framework. However, the code includes a rather simple framework to allow users to add new analyses on their own. This document serves as a guide to this. In addition, CheckMATE serves as a powerful tool for testing and implementing new search strategies. To aid this process, many tools are included to allow rapid prototyping of new analyses. Website: http://checkmate.hepforge.org/ Program summary Program title: CheckMATE, AnalysisManager Catalogue identifier: AEUT_v1_1 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEUT_v1_1.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 181436 No. of bytes in distributed program, including test data, etc.: 2169369 Distribution format: tar.gz Programming language: C++, Python. Computer: PC, Mac. Operating system: Linux, Mac OS. Catalogue identifier of previous version: AEUT_v1_0 Journal reference of previous version: Comput. Phys. Comm. 187 (2015) 227 Classification: 11.9.
External routines: ROOT, Python, Delphes (included with the distribution) Does the new version supersede the previous version?: Yes Nature of problem: The LHC has delivered a wealth of new data that is now being analysed. Both ATLAS and CMS have performed many searches for new physics that theorists are eager to test their models against. However, tuning the detector simulations, understanding the particular analysis details, and interpreting the results can be a tedious and repetitive task. Furthermore, new analyses are constantly being published by the experiments and might not yet be included in the official CheckMATE distribution. Solution method: The AnalysisManager within the CheckMATE framework allows the user to easily include new experimental analyses as they are published by the collaborations. Furthermore, completely novel analyses can be designed and added by the user in order to test models at higher centre-of-mass energy and/or luminosity. Reasons for new version: New features, bug fixes, additional validated analyses. Summary of revisions: New kinematic variables M_CT, M_T2bl, m_T, alpha_T, razor; internal likelihood calculation; missing-energy smearing; efficiency tables; validated tau-tagging; improved AnalysisManager and code structure; new analyses; bug fixes. Restrictions: Only a subset of the available experimental results has been implemented. Additional comments: CheckMATE is built upon the tools and hard work of many people. If CheckMATE is used in your publication it is extremely important that all of the following citations are included: • Delphes 3 [1]. • FastJet [2,3]. • Anti-kt jet algorithm [4]. • CLs prescription [5]. • In analyses that use the MT2 kinematical discriminant we use the Oxbridge Kinetics Library [6,7] and the algorithm developed by Cheng and Han [8], which also includes the MT2bl variable [9].
• In analyses that use the MCT family of kinematical discriminants we use MctLib [10,11], which also includes the MCT⊥ and MCTII variables [12]. • All experimental analyses that were used to set limits in the study. • The Monte Carlo event generator that was used. Running time: The running time scales about linearly with the number of input events provided by the user. The detector simulation/analysis of 20000 events needs about 50 s/1 s for a single-core calculation on an Intel Core i5-3470 with 3.2 GHz and 8 GB RAM. References: [1] J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, et al., "DELPHES 3, a modular framework for fast simulation of a generic collider experiment", 2013. [2] M. Cacciari, G. P. Salam, and G. Soyez, "FastJet user manual", Eur. Phys. J., vol. C72, p. 1896, 2012. [3] M. Cacciari and G. P. Salam, "Dispelling the N³ myth for the kt jet-finder", Phys. Lett., vol. B641, pp. 57–61, 2006. [4] M. Cacciari, G. P. Salam, and G. Soyez, "The anti-k(t) jet clustering algorithm", JHEP, vol. 0804, p. 063, 2008. [5] A. L. Read, "Presentation of search results: the CLs technique", Journal of Physics G: Nuclear and Particle Physics, vol. 28, no. 10, p. 2693, 2002. [6] C. Lester and D. Summers, "Measuring masses of semi-invisibly decaying particles pair produced at hadron colliders", Phys. Lett., vol. B463, pp. 99–103, 1999. [7] A. Barr, C. Lester, and P. Stephens, "m(T2): the truth behind the glamour", J. Phys., vol. G29, pp. 2343–2363, 2003. [8] H.-C. Cheng and Z. Han, "Minimal kinematic constraints and m(T2)", JHEP, vol. 0812, p. 063, 2008. [9] Y. Bai, H.-C. Cheng, J. Gallicchio, and J. Gu, "Stop the top background of the stop search", JHEP, vol. 1207, p. 110, 2012. [10] D. R. Tovey, "On measuring the masses of pair-produced semi-invisibly decaying particles at hadron colliders", JHEP, vol. 0804, p. 034, 2008. [11] G. Polesello and D. R. Tovey, "Supersymmetric particle mass measurement with the boost-corrected contransverse mass", JHEP, vol.
1003, p. 030, 2010. [12] K. T. Matchev and M. Park, "A general method for determining the masses of semi-invisibly decaying particles at hadron colliders", Phys. Rev. Lett., vol. 107, p. 061801, 2011. [ABSTRACT FROM AUTHOR]

Sarkadi, L.
Computer Physics Communications. Mar 2017, Vol. 212, p283–284. 2p.
 Subjects

BOUND-free transitions, WAVE functions, RELATIVISTIC astrophysics, COULOMB'S law, and ELECTROSTATIC interaction
 Abstract

The program MTRDCOUL [1] calculates the matrix elements of the Coulomb interaction between a charged particle and an atomic electron, ∫ ψ_f*(r) |R − r|⁻¹ ψ_i(r) dr. Bound-free transitions are considered, and relativistic hydrogenic wave functions are used. In this revised version a bug discovered in the F3Y CPC Program Library subprogram [2] is fixed. New version program summary Program Title: MTRDCOUL Program Files doi: http://dx.doi.org/10.17632/4cmts2c49b.1 Licensing provisions: GNU GPL v3 Programming language: Fortran 77 Journal reference of previous version: Comput. Phys. Commun. 141 (2001) 73 Does the new version supersede the previous version?: Yes Reasons for the new version: In some applications MTRDCOUL led to unexpected results that were traced back to the erroneous execution of the subprogram F3Y [2]. For example, in some cases F3Y yielded completely different values for the inputs (l, m1, l, m2, l′, m′) and (l, m2, l, m1, l′, m′), while, for symmetry reasons, one expects equal results. In the new version this error in F3Y was corrected. Summary of revisions: In line AAQQ0036 of the original F3Y program [2] the incorrect assignment A2 = L1 − M1 − N2 was replaced by A2 = L1 − M1 − N1. The corrected F3Y was tested by comparing its results with those obtained by a program written for the integral of the product of three spherical harmonics using the 369j program [3]. Agreement within 10⁻¹⁴ was found between the two calculations for all possible arguments of the function belonging to values of l_i up to 10. Nature of problem: The theoretical description of the excitation and ionization of atoms by charged-particle impact often requires knowledge of the matrix elements of the Coulomb interaction.
Considering that the program can easily be extended to the calculation of matrix elements between wave functions other than the hydrogenic ones, it may find broad application, including the treatment of electron–electron correlation problems. Solution method: The algorithm is based on the multipole series expansion of the Coulomb potential. Additional comments including Restrictions and Unusual features: The matrix elements are calculated with the following restrictions. The initial bound states are limited to 1s1/2, 2s1/2, 2p1/2, 2p3/2, 3s1/2, 3p1/2, 3p3/2, 3d3/2, 3d5/2. The quantum number l in the final state has a maximum value of 10. Acknowledgments: This work was supported by the National Scientific Research Foundation (OTKA, Grant No. K109440). [1] L. Lugosi and L. Sarkadi, Comput. Phys. Commun. 141 (2001) 73. [2] A. Liberato de Brito, Comput. Phys. Commun. 25 (1982) 81. [3] L. Wei, Comput. Phys. Commun. 120 (1999) 222; Erratum: 182 (2011) 1199. Appendix: TEST RUN OUTPUT [ABSTRACT FROM AUTHOR]

Duarte, L.G.S., da Mota, L.A.C.P., and Nunez, E.
Computer Physics Communications. Oct 2016, Vol. 207, p542–544. 3p.
 Subjects

DIFFERENTIAL invariants, FUNCTIONS (Mathematics), INVARIANTS (Mathematics), ORDINARY differential equations, and COMPUTATIONAL complexity
 Abstract

The method presented in Duarte and da Mota (2009) and Avellar et al. (2014) to search for first-order invariants of second-order ordinary differential equations (2ODEs) makes use of the so-called Darboux polynomials. The main difficulty involved in this process is the determination of the Darboux polynomials, which is computationally very expensive. Here, we introduce an optional argument in the main routine that enables a shortcut in the calculations through the use of the S function associated with the 2ODE. New version program summary Program Title: FiOrDii Program Files doi: http://dx.doi.org/10.17632/dbrskczxtd.1 Licensing provisions: GNU General Public License 3 Programming language: Maple 17 Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 307–316 Does the new version supersede the previous version?: Yes. Nature of problem: Determining first-order invariants of second-order ordinary differential equations. Solution method: The method of solution is published in [1]. Reasons for the new version: For certain 2ODEs the problem of determining the Darboux polynomials is computationally expensive or even impractical. If a rational second-order ordinary differential equation (2ODE) y″ = ϕ(x, y, z), (z ≡ y′), presents an elementary first integral (elementary first-order invariant) and a rational S function (for the definition of the S function and its uses see [3,4]), then its integrating factor R can be put in a very special form [1]: (1) R = ∏_i p_i^(n_i), where the p_i are Darboux polynomials of the 2ODE and the n_i are rational numbers. Thus, the whole strategy of our method is based on the determination of these polynomials. However, the determination of the Darboux polynomials (in three variables) of degree higher than two is computationally very expensive. This fact frequently leads to a practical limitation.
For certain 2ODEs, we can use an alternative way: we noted that, in many cases, the S function can be written in the form (2) S = f(x) + g(x) P(x, y, z)/N, where N is the denominator of ϕ, f(x) and g(x) are functions to be determined, and P is a polynomial (to be determined) in (x, y, z). In these cases it is (in general) much easier to compute the S function directly, avoiding the complications involved in the determination of the Darboux polynomials. So, we extend the capability of our routine Invar by providing an optional argument called Sfunc. In this way, if we are not succeeding with the original call, we can try this optional argument. Let us see this in practice. Consider the following 2ODE: (3) z′ = (x³yz + 2x³z² − zx³ − 3x²zy − 3x²z² + 3x²z − y² − yz + z² + y − z)/(x³y − x³ + y). The Maple command dsolve cannot deal with it (after consuming approximately 190 Mb of memory and 42 s of CPU time): sol ≔ ; t ≔ 42.391. Trying our command (Invar) in its original form, we cannot improve the output (even using the parameter Deg): "We could not find an Invariant, try different settings." (t ≔ 28.687); "We could not find an Invariant, try different settings." Error, (in SolveTools:cleanup) time expired (t ≔ 330.891), where the memory spent was ≈ 230 Mb and ≈ 280 Mb with Deg=2 and Deg=3, respectively. With the optional argument Sfunc, we get inv ≔ (zx³ − y) e^(−x)/(z + y − 1), t ≔ 0.516.
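As a quick numerical cross-check of this invariant, the fragment below (Java, purely illustrative; the package itself is Maple) integrates the 2ODE (3) with a classical RK4 step and verifies that (zx³ − y)e^(−x)/(z + y − 1) stays constant along the trajectory. The class and method names are ours, not part of FiOrDii.

```java
public class InvariantCheck {
    // Right-hand side of the 2ODE (3): z' = phi(x, y, z), with z = y'.
    static double phi(double x, double y, double z) {
        double x2 = x * x, x3 = x2 * x;
        double num = x3*y*z + 2*x3*z*z - x3*z
                   - 3*x2*y*z - 3*x2*z*z + 3*x2*z
                   - y*y - y*z + z*z + y - z;
        double den = x3*y - x3 + y;
        return num / den;
    }

    // The first-order invariant found via Sfunc: (z x^3 - y) e^(-x) / (z + y - 1).
    static double invariant(double x, double y, double z) {
        return (z*x*x*x - y) * Math.exp(-x) / (z + y - 1);
    }

    public static void main(String[] args) {
        double x = 1.0, y = 2.0, z = 0.5, h = 1e-4; // initial condition, step size
        double i0 = invariant(x, y, z);
        for (int n = 0; n < 2000; n++) { // classical RK4 on the system (y' = z, z' = phi)
            double k1y = z,           k1z = phi(x, y, z);
            double k2y = z + h/2*k1z, k2z = phi(x + h/2, y + h/2*k1y, z + h/2*k1z);
            double k3y = z + h/2*k2z, k3z = phi(x + h/2, y + h/2*k2y, z + h/2*k2z);
            double k4y = z + h*k3z,   k4z = phi(x + h, y + h*k3y, z + h*k3z);
            y += h/6*(k1y + 2*k2y + 2*k3y + k4y);
            z += h/6*(k1z + 2*k2z + 2*k3z + k4z);
            x += h;
        }
        // The drift should be tiny (limited only by the RK4 truncation error).
        System.out.printf("invariant drift after integration: %.1e%n",
                          Math.abs(invariant(x, y, z) - i0));
    }
}
```

One can check by hand that ∂I/∂x + z ∂I/∂y + ϕ ∂I/∂z vanishes identically for this pair, so any residual drift in the printed value comes purely from the integrator.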
The invariant is obtained quickly and can be used to fully integrate the 2ODE (3) through the command Invsolve (as usual): (zx³ − y) e^(−x)/(z + y − 1) is indeed a first-order invariant of d²y/dx² = (2(dy/dx)²x³ + (dy/dx)yx³ − 3(dy/dx)²x² − 3(dy/dx)yx² − (dy/dx)x³ + 3(dy/dx)x² + (dy/dx)² − (dy/dx)y − y² − (dy/dx) + y)/(yx³ − x³ + y). The solution for the first-order invariant regarded as a 1ODE is y(x) = (∫ −e^(x − ∫ (K + e^x)/(Kx³ − e^x) dx) (Kx³ − e^x)⁻¹ dx + _C1) e^(∫ (K + e^x)/(Kx³ − e^x) dx). The parameter Deg can be used in association with Sfunc but, in this context, it indicates the degree of the polynomial P (see Eq. (2)). Consider the 2ODE (4) z′ = (zyx² + x²z² + xz² − yx − 2xz − z² − y + 1)/(yx² − x + 1). The Invar command with its default degree (Deg=1) does not produce a positive answer, but with Deg=2 we have: inv ≔ e^x (yx + z − 1)/(zx − 1). For this type of 2ODE (presenting an S function of the form (2)) we show a small table with the performance of the command.

Table. Comparative performance of the Invar command (ϕ is the rhs of the 2ODE).

ϕ = (−x⁴z + 2z²x³ − x²y + yx − zy)/(x²(−x³ + y)):
Invar (original): result –, time 300, Deg 3; Invar (S function): result e^(−1/x)(zx² − y)/(z − x), time 0.11, Deg 1.

ϕ = (−zyx² + 2zyx + 2z²x − yx²)/(y + 1):
Invar (original): result ln((z + y)/(zx² − 1)) − x, time 177, Deg 3; Invar (S function): result e^x(zx² − 1)/(z + y), time 0.33, Deg 1.

ϕ = (zy²x − z²yx + z³x − zy² + z²y − yx)/(y² − 1):
Invar (original): result –, time 300, Deg 3; Invar (S function): result (xyz − 1)e^(−x)/(z − y), time 0.53, Deg 3.

Observations: • The CPU time is measured in seconds, and we set the time limit to five minutes (300 s). • The built-in Maple command dsolve cannot solve (nor reduce) any of these 2ODEs (in any amount of time). • The first two 2ODEs in the table can be fully solved by the command InvSolve. The third one can only be reduced (through the Invar command).
• The Invar command in its original form could (theoretically) deal with all these 2ODEs. However, in practice, it cannot handle 2ODEs 1 and 3 of the table, because the notebook's memory is not enough. Summary of revisions: Instructions to use the FiOrDii package (README.pdf), computational routines (FiOrDii.txt), and a test file (FiOrDii.mw). Restrictions: If, for the ODE under consideration, the S function is not of the form (2), then the upgrade does not apply. Unusual features: Our implementation not only searches for first-order differential invariants, but can also be used as a research tool that allows the user to follow all the steps of the procedure (for example, we can calculate the associated "D" operator, the corresponding Darboux polynomials and associated cofactors, etc.). In addition, since our package is based on recent theoretical developments [1], it can successfully reduce some rational 2ODEs that are not solved (or reduced) by some of the best-known methods available. The optional parameter Sfunc (which bypasses some drawbacks caused by the presence of high-degree Darboux polynomials in the process) allows the command Invar to use the S function 'directly' to (in some cases) find the invariant. Running time: This depends strongly on the ODE, but is usually under 4 s. Acknowledgment: L.G.S. Duarte and L.A.C.P. da Mota would like to thank FAPERJ (Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro) for financial support. References: [1] L.G.S. Duarte and L.A.C.P. da Mota, Finding elementary first integrals for rational second order ordinary differential equations, J. Math. Phys. 50 (2009). [2] J. Avellar, L.G.S. Duarte, S.E.S. Duarte and L.A.C.P. da Mota, A Maple package to find first order differential invariants of 2ODEs via a Darboux approach, Computer Physics Communications 185 (2014). [3] L.G.S. Duarte, S.E.S. Duarte, L.A.C.P. da Mota and J.E.F.
Skea, Solving second order ordinary differential equations by extending the Prelle–Singer method, Journal of Physics A: Mathematical and General 34, 3015–3024; L.G.S. Duarte, S.E.S. Duarte and L.A.C.P. da Mota, A semi-algorithm to find elementary first order invariants of rational second order ordinary differential equations, Applied Mathematics and Computation 184 (2007) 211; L.G.S. Duarte and L.A.C.P. da Mota, 3D polynomial dynamical systems with elementary first integrals, J. Phys. A: Math. Theor. 43 (2010). [4] V.K. Chandrasekar, M. Senthilvelan and M. Lakshmanan, On the complete integrability and linearization of certain second-order nonlinear ordinary differential equations, Proc. R. Soc. A 461 (2005) 2451–2476; V.K. Chandrasekar, M. Senthilvelan and M. Lakshmanan, Extended Prelle–Singer method and integrability/solvability of a class of nonlinear nth order ordinary differential equations, Journal of Nonlinear Mathematical Physics 12, Supplement 1 (2005) 184–201. [ABSTRACT FROM AUTHOR]

Alves, P.R.L., Duarte, L.G.S., and da Mota, L.A.C.P.
Computer Physics Communications. Oct 2016, Vol. 207, p539–541. 3p.
 Subjects

MATHEMATICAL mappings, TIME series analysis, PHASE space, COMPUTATIONAL complexity, PROGRAMMING languages, and DIFFERENTIAL equations
 Abstract

The Maple package TimeS for time series analysis has a new feature and an improvement in forecasting by phase space reconstruction: an optional argument in the computational routines that allows the researcher to choose a different number of steps ahead to forecast. This update also extends the running of the package, with this new feature, to the current versions of the Maple software. New version program summary Program Title: TimeS Program Files doi: 10.17632/nhtmjc8yp8.1 Licensing provisions: GNU General Public License 3 Programming language: Maple 17 Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 1115 Does the new version supersede the previous version?: Yes. Nature of problem: Time series analysis and improving forecast capability. Solution method: The method of solution is published in [1]. Reasons for the new version: For a phenomenon described by an unknown low-dimensional dynamical system defined by a set of coupled differential equations ẋ_i = f_i(x), i = 1, …, n, one generates a map M for which the time variable is increased by δt: x_i(P + 1) = F_i(x(P), δt). Through another map M̄ that approaches M, X̄_i(P + 1) is close to x_i(P + 1) when δt → 0 [1]: (1) X̄_i(P + 1) = F̄_i(x(P), δt) = Σ_(k=0..N) ((δt)^k / k!) X^k[x_i(P)]. In the reconstruction scheme by the method of delays [2], x(P) is the reconstructed state vector. Its components have regular spacings in the time series. The approximation X̄_i(P + 1) corresponds to the prediction for the value x_i(P + 1) [3]. If one applies δt → τ δt′ in the mapping (1), then (2) X̄_i(P + τ) = F̄_i(x(P), τ δt′) = Σ_(k=0..N) ((τ δt′)^k / k!) X^k[x_i(P)]. This application enables the choice of different prediction times, e.g., in chaotic time series [4]. Thus, entries beyond the nearest neighbor of the last known value can be predicted and analyzed.
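The reconstruction by delays underlying the scheme above can be sketched outside Maple. This illustrative fragment (Java; the TimeS package itself performs this step with its VecTS command in Maple, and the names here are ours) builds the state vectors x(P) from a scalar series using an embedding dimension and a delay:

```java
// Delay-coordinate reconstruction: from a scalar series s[0..N-1], build state
// vectors x(P) = (s[P], s[P - tau], ..., s[P - (dim-1)*tau]), whose components
// have regular spacings in the time series, as described in the text.
class DelayEmbedding {
    static double[][] reconstruct(double[] series, int dim, int tau) {
        int first = (dim - 1) * tau; // earliest index with a full delay history
        double[][] vectors = new double[series.length - first][dim];
        for (int p = first; p < series.length; p++)
            for (int k = 0; k < dim; k++)
                vectors[p - first][k] = series[p - k * tau];
        return vectors;
    }
}
```

A global map F̄ of the form (1) is then fitted over these vectors; stepping that map once gives the τ = 1 prediction, and rescaling the time step as in (2) gives the τ-steps-ahead prediction selected by the new PT argument.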
Here, we extend the forecasting and analysis capacity of the computational routines by means of the optional argument called PT. It corresponds to the parameter τ in the mapping (2). The researcher can include this option in the commands GfiTS, ForecasTS and IforecasTS. In the NIforecasTS command, our package was already trying to tackle a similar calculation, but it employs the global mapping corresponding to the next entry in the time series, i.e., only the parameter τ = 1 takes part in the global fitting. The program predicts the N steps by forecasting the next step (corresponding to N = 1), using it as the N = 2 data, and so on, up to the actual value of N that we are trying to forecast. Because of this different conception, the routines NIforecasTS, AnalysTS and GrafiTS have not changed with respect to their running. The use of the commands in this version of the TimeS package closely follows the previous version, but the programming logic in the GfiTS routine required an update for the correct running of the new features in the current versions of the Maple software. We considered our internal procedure gerpoly deprecated (it does not work properly in the Maple 17 release), and it has been deleted in this version. The polynomials are now generated inside the GfiTS routine. There is a change in the optional argument that selects the part of the time series for the global mapping: the choice now refers to the last vector to be used in the global mapping. The argument IniPoint was replaced by Final for all routines in this update. The goal of this new option is to relate the global mapping more easily with the position on the time series of interest in the forecasting. Another slight modification is present in the error calculation via the optional argument Poptions: instead of the percent error, this analysis option now prints the actual error. Let us consider the same time series as in the original paper [1].
It is stored in the file ‘ts37.txt’ and corresponds to the dynamical variable X of the Lorenz System [5]. Below, we present the Maple worksheet for the reconstruction of phase space, the global mapping, the forecasting and the improvement of the forecast. The respective routines in this example are VecTS, GfiTS, ForecasTS and IforecasTS. The value to be forecast is dat[402]. The fitted global map is −0.03991323747 X1² + 0.1157254791 X1 X2 − 0.05432130896 X1 X3 − 0.03977733053 X2² + 0.03040753173 X2 X3 − 0.005038168581 X3² + 1.487438057 X1 − 0.3816889428 X2 + 0.09738575830 X3. For the true value 9.292602122, ForecasTS returns 9.231602460 and IforecasTS returns 9.278554589. Thus, for the forecasting and analysis of the second entry in a given time series (i.e., τ = 2), the argument is PT=2, and so on. In order to compare the same prediction by the command NIforecasTS, we include its respective prompt too; it returns 9.333733567. The argument Nsteps=2 specifies the parameter τ = 2, and the map Mapag[1] corresponds to τ = 1 above. In this application, the prediction with the new feature in IforecasTS (9.279, compared to 9.333 for the true value 9.293) is more accurate than the result of the command NIforecasTS. Besides being an alternative for improved accuracy, the new feature enlarges the possibilities for time series analysis with the TimeS package. Summary of revisions: Modification of the way the needed polynomials are generated, making the running of the program compatible with new versions of Maple (release 17 and up); introduction of the possibility of calculating the prediction N steps ahead, which improved our previous similar calculation; instructions for using the TimeS package (README.pdf), computational routines (TimeS.txt) and test file (TimeS.mw). Restrictions: If the time series being analyzed presents a great amount of noise, or if the dynamical system behind the time series is of high dimensionality (Dim ≫ 3), then the method may not work well.
Unusual features: In cases where the dynamics behind the time series is given by a system of low dimensionality, our implementation can greatly improve the forecast. Running time: It depends strongly on the command used. [1] H. Carli, L. Duarte, L. da Mota, A Maple package for improved global mapping forecast, Computer Physics Communications 185 (3) (2014) 1115–1129. [2] N.H. Packard, J.P. Crutchfield, J.D. Farmer, R.S. Shaw, Geometry from a time series, Phys. Rev. Lett. 45 (1980) 712–716. [3] H. Kantz, T. Schreiber, Nonlinear Time Series Analysis, Cambridge Nonlinear Science Series, Cambridge University Press, 2004. [4] J.D. Farmer, J.J. Sidorowich, Predicting chaotic time series, Phys. Rev. Lett. 59 (1987) 845–848. [5] E. Lorenz, Deterministic nonperiodic flow, Journal of the Atmospheric Sciences 20 (2) (1963) 130–141. [ABSTRACT FROM AUTHOR]
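The comparison above (a directly fitted τ = 2 map via PT=2 versus iterating the τ = 1 map twice, as NIforecasTS does) can be sketched as follows. This is an illustrative Python toy, not the Maple package: one-dimensional states and a logistic-map series replace the reconstructed Lorenz vectors, and the polynomial degrees are chosen so that both fits can represent the underlying maps exactly.

```python
import numpy as np

# Hypothetical scalar series from the logistic map (stand-in for a chaotic observable).
s = [0.3]
for _ in range(400):
    s.append(3.9 * s[-1] * (1.0 - s[-1]))
s = np.array(s)

x, y1, y2 = s[:-2], s[1:-1], s[2:]            # states, values 1 and 2 steps ahead
x_tr, y1_tr, y2_tr = x[:-1], y1[:-1], y2[:-1] # hold out the last pair

c1 = np.polyfit(x_tr, y1_tr, 2)   # tau = 1 global map (quadratic)
c2 = np.polyfit(x_tr, y2_tr, 4)   # tau = 2 global map fitted directly (quartic)

direct = np.polyval(c2, x[-1])                    # PT=2-style: one map, two steps
iterated = np.polyval(c1, np.polyval(c1, x[-1]))  # NIforecasTS-style: iterate tau=1
truth = y2[-1]
```

With exact fits both routes agree with the true value; on real chaotic data the two generally differ, which is the comparison made in the worksheet above.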

Bonhommeau, David A.
Computer Physics Communications. Nov2015, Vol. 196, p614–616. 3p.
 Subjects

MONTE Carlo method, THERMODYNAMICS, MULTIPLY charged ions, PROGRAMMING languages, COMPUTER operating systems, and ELECTROSPRAY ionization mass spectrometry
 Abstract

This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters fixes some minor bugs present in earlier versions. A figure representing the required RAM per replica as a function of the cluster size (N ≤ 20000) is also provided as a benchmark. New version program summary Program title: MCMC2 Catalogue identifier: AENZ_v1_2 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENZ_v1_2.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 143653 No. of bytes in distributed program, including test data, etc.: 1396311 Distribution format: tar.gz Programming language: Fortran 90 with MPI extensions for parallelization. Computer: x86 and IBM platforms. Operating system: 1. CentOS 5.6 Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2; 2. CentOS 5.3 Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2; 3. Red Hat Enterprise 5.3 Intel Xeon X5650 2.67 GHz, gfortran + Intel MPI; 4. IBM Power 6 4.7 GHz, xlf + PESSL (IBM parallel library). Has the code been vectorized or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: Up to 999 RAM: (per CPU core) 10–20 MB. The physical memory needed for the simulation depends on the cluster size; the values indicated are typical for small or medium-sized clusters (N ≤ 300–400). The size of A_N^(n+) clusters (N = number of particles, n = number of charged particles, with n ≤ N) should not exceed 1.6 × 10^4 (respectively 2.0 × 10^4) particles on servers with 2 GB (respectively 3 GB) of RAM per CPU core if n = 0 (neutral clusters) or n = N ("fully-charged" clusters).
For charged clusters composed of neutral and charged particles (e.g., n = N/2), the maximum cluster size can drop to 1.4 × 10^4 and 1.8 × 10^4 on servers with 2 GB and 3 GB of RAM, respectively (see the figure given in Supplementary Material). Supplementary material: A figure showing the amount of RAM required per replica as a function of the size of A_N^(n+) clusters can be downloaded. Supplementary material related to this article can be found online at http://dx.doi.org/10.1016/j.cpc.2015.06.017. The following is the Supplementary material related to this article: MMC S1, the amount of RAM required per replica (in GB) as a function of the cluster size; the calculations have been performed without taking polarization into account. Catalogue identifier of previous version: AENZ_v1_1 Journal reference of previous version: Comput. Phys. Comm. 185 (2014) 1188 Classification: 23. Does the new version supersede the previous version?: Yes Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply-charged clusters. Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply-charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve such a goal: the Parallel Tempering method, where replicas of the same cluster at different temperatures are distributed among different CPUs, and Parallel Charging, where replicas (at the same temperature) having different particle charges or numbers of charged particles are distributed on different CPUs. Reasons for new version: This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters fixes some minor bugs present in earlier versions. A figure representing the required RAM per replica as a function of the cluster size (N ≤ 20000) is also provided as a benchmark. Summary of revisions: 1.
Additional features of MCMC2 version 1.1.1: Same as in the previous version; 2. Modifications or corrections to MCMC2 version 1.1 [2,3]: (a) Several minor bugs were fixed in this version: i. A default value for the integer "irand", used to select the type of random number generator (keyword SEED, subkeyword METHOD), was missing; it is now set to 0. ii. The subkeyword "EVERY", used to define the frequency of statistics printing (keyword "STATISTICS"), was missing and has been implemented in the program. Before version 1.1.1, the choice entered into the setup file was simply ignored and the frequency was always set to its default value, namely a printing every 100 Monte Carlo sweeps. (b) Some unused integers were removed from subroutines in lib4pol.f90 and lib4dampol.f90, and some test runs were slightly modified. In particular, in test run 2, the particle and probe diameters used to evaluate the number of surface particles were fixed to 0.8 and 1.2, respectively (see keyword "SURFACE"). Since the probe diameter should actually be smaller than the particle diameter [4], the two values were therefore swapped. (c) The subroutines dLJ_nopol_hom (in lib4nopol.f90), dLJ_pol_hom (in lib4pol.f90), and dLJ_dampol_hom (in lib4dampol.f90) were renamed dLJ_nopol, dLJ_pol, and dLJ_dampol, respectively, to avoid any ambiguity. The suffix "Hom", which stood for "homogeneity" in order to indicate that the Lennard-Jones interactions between particles were all the same, was improper, since homogeneity is commonly related to invariance by translation, and the properties of multiply-charged clusters cannot all be considered invariant by translation in the most general case. The renaming of the three subroutines obviously has no influence on the results, and some related comments have been modified accordingly.
Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge–charge, charge–induced dipole, induced dipole–induced dipole, polarization). Furthermore, the Monte Carlo simulations can only be performed in the NVT ensemble, and the size of charged clusters should not exceed 2.0 × 10^4 particles on CPU cores with less than 3 GB of RAM each. It is worth noting that the latter restriction is not significantly crippling, since MCMC2 should be mainly devoted to the investigation of medium-sized cluster properties, given the difficulty of converging Monte Carlo simulations on large systems (N ≥ 10^3) [1]. Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with particle charges and numbers of charged particles as parameters instead of temperature, is an interesting new approach to explore energy landscapes. Splitting of the simulations is allowed, and averages are accordingly updated. Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarization turned on or off, and the method used for calculating the induced dipoles). Typically, a complete simulation can last from a few tens of minutes or a few hours for small clusters (N ≤ 100, not including polarization interactions), to one week for large clusters (N ≥ 1000, not including polarization interactions), and several weeks for large clusters (N ≥ 1000) when including polarization interactions. A restart procedure has been implemented that enables a splitting of the simulation accumulation phase. References: [1] E. Pahl, F. Calvo, L. Koci, P. Schwerdtfeger, Accurate Melting Temperatures for Neon and Argon from Ab Initio Monte Carlo Simulations, Angew. Chem. Int. Ed. 47 (2008) 8207–8210. [2] D.A. Bonhommeau, M.P.
Gaigeot, MCMC2: A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 184 (2013) 873–884. [3] D.A. Bonhommeau, M. Lewerenz, M.P. Gaigeot, MCMC2 (version 1.1): A Monte Carlo code for multiply-charged clusters, Comput. Phys. Commun. 185 (2014) 1188–1191. [4] M.A. Miller, D.A. Bonhommeau, C.J. Heard, Y. Shin, R. Spezia, M.P. Gaigeot, Structure and stability of charged clusters, J. Phys.: Condens. Matter 24 (2012) 284130. [ABSTRACT FROM AUTHOR]
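The Parallel Tempering solution method described above (replicas of the same system at different temperatures that occasionally exchange configurations) can be sketched serially in a few lines. The Python toy below is not the parallel Fortran code: the one-dimensional double-well potential, the temperature ladder and the step size are arbitrary choices for illustration.

```python
import math
import numpy as np

def U(x):
    # Double-well potential with minima at x = +/-1 and a barrier of height 1.
    return (x * x - 1.0) ** 2

rng = np.random.default_rng(0)
betas = [8.0, 4.0, 2.0, 1.0]     # cold to hot inverse temperatures
xs = [1.0] * len(betas)          # all replicas start in the right-hand well
samples = [[] for _ in betas]

for sweep in range(4000):
    # One Metropolis move per replica at its own temperature.
    for i, beta in enumerate(betas):
        prop = xs[i] + rng.uniform(-0.5, 0.5)
        dU = U(prop) - U(xs[i])
        if dU <= 0.0 or rng.random() < math.exp(-beta * dU):
            xs[i] = prop
        samples[i].append(xs[i])
    # Attempt one swap between a random pair of adjacent temperatures;
    # acceptance min(1, exp[(beta_i - beta_j)(U_i - U_j)]) keeps detailed balance.
    i = rng.integers(len(betas) - 1)
    d = (betas[i] - betas[i + 1]) * (U(xs[i]) - U(xs[i + 1]))
    if d >= 0.0 or rng.random() < math.exp(d):
        xs[i], xs[i + 1] = xs[i + 1], xs[i]

cold = np.array(samples[0])
hot = np.array(samples[-1])
```

The swap acceptance rule preserves the joint distribution of all replicas, which is what lets barrier crossings made by the hot replica propagate down to the cold one.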

Dobaczewski, J. and Olbratowski, P.
Computer Physics Communications. May2005, Vol. 167 Issue 3, p214–216. 3p.
 Subjects

SKYRME model, CARTESIAN linguistics, HARMONIC oscillators, and COMPUTER programming
 Abstract

Abstract: We describe the new version (v2.08k) of the code HFODD which solves the nuclear Skyrme–Hartree–Fock or Skyrme–Hartree–Fock–Bogolyubov problem by using the Cartesian deformed harmonic-oscillator basis. As in the previous version (v2.08i), all symmetries can be broken, which allows for calculations with angular frequency and angular momentum tilted with respect to the mass distribution. In the new version, three minor errors have been corrected. New Version Program Summary: Title of program: HFODD; version: 2.08k Catalogue number: ADVA Catalogue number of previous version: ADTO (Comput. Phys. Comm. 158 (2004) 158) Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVA Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Does the new version supersede the previous one: yes Computers on which this or another recent version has been tested: SG Power Challenge L, Pentium II, Pentium III, AMD Athlon Operating systems under which the program has been tested: UNIX, LINUX, Windows 2000 Programming language used: Fortran Memory required to execute with typical data: 10M words No. of bits in a word: 64 No. of lines in distributed program, including test data, etc.: 52 631 No. of bytes in distributed program, including test data, etc.: 266 885 Distribution format: tar.gz Nature of physical problem: The nuclear mean field and an analysis of its symmetries in realistic cases are the main ingredients of a description of nuclear states. Within the Local Density Approximation, or for a zero-range velocity-dependent Skyrme interaction, the nuclear mean field is local and velocity dependent. The locality allows for an effective and fast solution of the self-consistent Hartree–Fock equations, even for heavy nuclei, and for various nucleonic (n-particle n-hole) configurations, deformations, excitation energies, or angular momenta.
A similar Local Density Approximation in the particle–particle channel, which is equivalent to using a zero-range interaction, allows for a simple implementation of pairing effects within the Hartree–Fock–Bogolyubov method. Solution method: The program uses the Cartesian harmonic-oscillator basis to expand single-particle or single-quasiparticle wave functions of neutrons and protons interacting by means of the Skyrme effective interaction and a zero-range pairing interaction. The expansion coefficients are determined by the iterative diagonalization of the mean-field Hamiltonians or Routhians, which depend nonlinearly on the local neutron and proton densities. Suitable constraints are used to obtain states corresponding to a given configuration, deformation or angular momentum. The method of solution has been presented in [J. Dobaczewski, J. Dudek, Comput. Phys. Comm. 102 (1997) 166]. Summary of revisions: 1. The incorrect value of the "" force parameter for SLY5 has been corrected. 2. The opening of an empty file "FILREC" for IWRIRE=−1 has been removed. 3. The call to subroutine "OLSTOR" has been moved before that to "SPZERO"; in this way, correct data transferred to "FLISIG", "FLISIM", "FLISIQ" or "FLISIZ" allow for a correct determination of the candidate states for diabatic blocking. These corrections pertain to the user interface of the code and do not affect results obtained for forces other than SLY5. Restrictions on the complexity of the problem: The main restriction is the CPU time required for calculations of heavy deformed nuclei at a given required precision. Pairing correlations are only included for even–even nuclei and conserved simplex symmetry. Unusual features: The user must have access to the NAGLIB subroutine F02AXE or to the LAPACK subroutines ZHPEV or ZHPEVX, which diagonalize complex Hermitian matrices, or provide another subroutine which can perform such a task.
The LAPACK subroutines ZHPEV and ZHPEVX can be obtained from the Netlib Repository at the University of Tennessee, Knoxville: http://netlib2.cs.utk.edu/cgibin/netlibfiles.pl?filename=/lapack/complex16/zhpev.f and http://netlib2.cs.utk.edu/cgibin/netlibfiles.pl?filename=/lapack/complex16/zhpevx.f, respectively. The code is written in single precision for use on a 64-bit processor. The compiler option -r8 or +autodblpad (or equivalent) has to be used to promote all real and complex single-precision floating-point items to double precision when the code is used on a 32-bit machine. Typical running time: One Hartree–Fock iteration for the superdeformed, rotating, parity-conserving state of ¹⁵²₆₆Dy₈₆ takes about six seconds on the AMD Athlon 1600+ processor. Starting from the Woods–Saxon wave functions, about fifty iterations are required to obtain the energy converged to a precision of about 0.1 keV. In the case when every value of the angular velocity is converged separately, the complete superdeformed band with precisely determined dynamical moments can be obtained within forty minutes of CPU time on the AMD Athlon 1600+ processor. This time can often be reduced by a factor of three when a self-consistent solution for a given rotational frequency is used as a starting point for a neighboring rotational frequency. Additional comments: The actual output files obtained during the user's test runs may differ from those provided in the distribution file. The differences may occur because various compilers may produce different results in the following aspects: [(a)] The initial Nilsson spectrum (the starting point of each run) is Kramers degenerate, and thus the diagonalization routine may return the degenerate states in arbitrary order and in arbitrary mixture. For an odd number of particles, one of these states becomes occupied, and the other one is left empty.
Therefore, starting points of such runs can widely vary from compiler to compiler, and these differences cannot be controlled. [(b)] For axial shapes, two quadrupole moments (with respect to two different axes) become very small and their values reflect only numerical noise. However, depending on which of these two moments is smaller, the intrinsic-frame Euler axes will differ, most often by 180 degrees. Hence, signs of some moments and angular momenta may vary from compiler to compiler, and these differences cannot be controlled. These differences are insignificant: the final energies do not depend on them, although the intermediate results can. [Copyright Elsevier]
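The solution method above (expansion in a basis, with coefficients found by iteratively diagonalizing Hamiltonians that depend nonlinearly on the densities) has the generic self-consistent-field structure sketched below. The Python toy uses an 8-site hopping matrix with a density-dependent diagonal term; the model, the coupling g and the mixing factor are invented for illustration and have nothing to do with the Skyrme functional itself.

```python
import numpy as np

def scf(h0, g, n_occ, mix=0.4, tol=1e-10, maxit=500):
    """Iterative diagonalization of h(rho) = h0 + g*diag(rho) with linear mixing."""
    n = h0.shape[0]
    rho = np.full(n, n_occ / n)              # uniform starting density
    for it in range(maxit):
        h = h0 + g * np.diag(rho)            # density-dependent mean field
        eps, psi = np.linalg.eigh(h)         # diagonalize the current Hamiltonian
        rho_new = (psi[:, :n_occ] ** 2).sum(axis=1)  # occupy the lowest orbitals
        if np.max(np.abs(rho_new - rho)) < tol:
            return eps, rho_new, it
        rho = mix * rho_new + (1.0 - mix) * rho      # damped update for stability
    raise RuntimeError("SCF loop did not converge")

# Toy single-particle Hamiltonian: an 8-site chain with hopping -1.
n = 8
h0 = -np.eye(n, k=1) - np.eye(n, k=-1)
eps, rho, iters = scf(h0, g=0.3, n_occ=4)
```

The damped density update plays the role of the convergence-stabilizing devices used in realistic mean-field codes; without some mixing, such fixed-point loops can oscillate.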

Gonze, X., Jollet, F., Abreu Araujo, F., Adams, D., Amadon, B., Applencourt, T., Audouze, C., Beuken, J.M., Bieder, J., Bokhanchuk, A., Bousquet, E., Bruneval, F., Caliste, D., Côté, M., Dahm, F., Da Pieve, F., Delaveau, M., Di Gennaro, M., Dorado, B., and Espejo, C.
Computer Physics Communications. Aug2016, Vol. 205, p106–131. 26p.
 Subjects

COMPUTER software, DENSITY functional theory, ELECTRONIC structure, MANY-body perturbation calculations, PROGRAMMING languages, and ELASTICITY
 Abstract

ABINIT is a package whose main program allows one to find the total energy, charge density, electronic structure and many other properties of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), Many-Body Perturbation Theory (the GW approximation and Bethe–Salpeter equation) and Dynamical Mean Field Theory (DMFT). ABINIT also allows one to optimize the geometry according to the DFT forces and stresses, to perform molecular dynamics simulations using these forces, and to generate dynamical matrices, Born effective charges and dielectric tensors. The present paper aims to describe the new capabilities of ABINIT that have been developed since 2009. It covers both physical and technical developments inside the ABINIT code, as well as developments provided within the ABINIT package. The developments are described with relevant references, input variables, tests and tutorials. Program summary Program title: ABINIT Catalogue identifier: AEEU_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEU_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 4845789 No. of bytes in distributed program, including test data, etc.: 71340403 Distribution format: tar.gz Programming language: Fortran 2003, Perl scripts, Python scripts. Classification: 7.3, 7.8. External routines: (all optional) BigDFT [2], ETSF_IO [3], libxc [4], NetCDF [5], MPI [6], Wannier90 [7], FFTW [8]. Catalogue identifier of previous version: AEEU_v1_0 Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2582 Does the new version supersede the previous version?: Yes.
The abinit-7.10.5 version is now the up-to-date stable version of ABINIT. Nature of problem: This package has the purpose of accurately computing material and nanostructure properties: electronic structure, bond lengths, bond angles, primitive cell size, cohesive energy, dielectric properties, vibrational properties, elastic properties, optical properties, magnetic properties, non-linear couplings, electronic and vibrational lifetimes, and others. Solution method: Software application based on Density Functional Theory, Many-Body Perturbation Theory and Dynamical Mean Field Theory, with pseudopotentials, and plane waves or wavelets as basis functions. Reasons for new version: Since 2009, the code has evolved considerably, and the abinit-5.7.4 version is no longer up to date. The abinit-7.10.5 version contains new physical and technical features that allow electronic structure calculations impossible to carry out with the previous versions. Summary of revisions: • new physical features: quantum effects for the nuclei treated by Path-Integral Molecular Dynamics; finding transition states using image dynamics (NEB or string methods); two-component DFT for electron-positron annihilation; linear response in a Projector Augmented-Wave (PAW) approach; electron-phonon interactions and the temperature dependence of the gap; the Bethe–Salpeter Equation (BSE); Dynamical Mean Field Theory (DMFT). • new technical features: development of a PAW approach for a wavelet basis; parallelisation of the code on more than 10,000 processors; a new build system. • new features in the ABINIT package: tests; a test farm; new tutorials; new pseudopotential and PAW atomic data tables; a GUI and post-processing tools such as the AbiPy and APPA libraries. Running time: It is difficult to give a single answer, as the uses of ABINIT vary widely. On one hand, ABINIT can run on 10,000 processors for hours to perform quantum molecular dynamics on large systems.
On the other hand, tutorials for students can be performed on a laptop within a few minutes. References: [1] http://www.gnu.org/copyleft/gpl.txt [2] http://bigdft.org [3] http://www.etsf.eu/fileformats [4] http://www.tddft.org/programs/octopus/wiki/index.php/Libxc [5] http://www.unidata.ucar.edu/software/netcdf [6] https://en.wikipedia.org/wiki/Message_Passing_Interface [7] http://www.wannier.org [8] M. Frigo and S.G. Johnson, Proceedings of the IEEE, 93, 216–231 (2005). [ABSTRACT FROM AUTHOR]
13. GetDDM: An open framework for testing optimized Schwarz methods for time-harmonic wave problems. [2016]

Thierry, B., Vion, A., Tournier, S., El Bouajaji, M., Colignon, D., Marsic, N., Antoine, X., and Geuzaine, C.
Computer Physics Communications. Jun2016, Vol. 203, p309–330. 22p.
 Subjects

SCHWARZ function, ANALYTIC functions, ELECTRICAL harmonics, FINITE element method, and HELMHOLTZ equation
 Abstract

We present an open finite element framework, called GetDDM, for testing optimized Schwarz domain decomposition techniques for time-harmonic wave problems. After a review of Schwarz domain decomposition methods and associated transmission conditions, we discuss the implementation, based on the open source software GetDP and Gmsh. The solver, along with ready-to-use examples for Helmholtz and Maxwell's equations, is freely available online for further testing. Program summary Program title: GetDDM Catalogue identifier: AEZZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEZZ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU GPL v2 No. of lines in distributed program, including test data, etc.: 1426800 No. of bytes in distributed program, including test data, etc.: 12362781 Distribution format: tar.gz Programming language: Gmsh (http://gmsh.info) and GetDP (http://getdp.info). Computer: PC, Mac, tablets, computer clusters. Operating system: Linux, Windows, Mac OS X. Has the code been vectorized or parallelized?: Yes RAM: From 512 Megabytes upwards Classification: 4.3, 4.12, 6.5, 10. Nature of problem: Computing the solution of large-scale time-harmonic acoustic and electromagnetic wave problems. Solution method: Finite element method with an optimized Schwarz domain decomposition method. Running time: From a few seconds for simple problems to several days for large-scale simulations. [ABSTRACT FROM AUTHOR]
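The structure of a Schwarz domain decomposition solve (independent subdomain solves exchanging interface data until the pieces agree) can be sketched in a few lines. The Python toy below uses the classical overlapping alternating Schwarz method with Dirichlet transmission on a 1D Poisson problem; GetDDM's contribution is precisely the optimized transmission conditions needed to make such iterations effective for time-harmonic Helmholtz and Maxwell problems, which this sketch does not implement.

```python
import numpy as np

def subdomain_solve(f, left, right, h):
    # Direct solve of -u'' = f on one subdomain with Dirichlet data left/right.
    n = len(f)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    rhs = f.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

N, h = 49, 1.0 / 50.0      # interior nodes of (0, 1), with u(0) = u(1) = 0
f = np.ones(N)             # -u'' = 1, exact solution u = x(1 - x)/2
l, r = 20, 28              # overlapping subdomains: nodes 0..r and l..N-1

u = np.zeros(N)
for _ in range(50):        # alternating Schwarz sweeps
    u[:r + 1] = subdomain_solve(f[:r + 1], 0.0, u[r + 1], h)   # left solve
    u[l:] = subdomain_solve(f[l:], u[l - 1], 0.0, h)           # right solve

x = (np.arange(N) + 1) * h
err = np.max(np.abs(u - x * (1.0 - x) / 2.0))
```

For this Laplace-type problem the iteration contracts geometrically at a rate set by the overlap; for Helmholtz-type problems, plain Dirichlet exchange converges poorly or not at all, which motivates the optimized conditions reviewed in the paper.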