In this paper, we introduce a new image denoising model: the damped flow (DF), which is a second order nonlinear evolution equation associated with a class of energy functionals of an image. The existence, uniqueness and regularization property of DF are proven. For the numerical implementation, based on the Störmer–Verlet method, a discrete DF, SV-DDF, is developed. The convergence of SV-DDF is studied as well. Several numerical experiments, as well as a comparison with other methods, are provided to demonstrate the efficiency of SV-DDF.
We have studied examples of random walk methods (RWM) applied to electric problems, solving the Laplace equation. In particular, we performed calculations with digital pictures as domains and evaluated an improvement in calculation speed for the random walk method, obtained by varying the step length depending on the distance to boundaries and obstructions in the domain. Statistical numerical experiments were used to assess whether the improved method affected the accuracy of the solution. The results in this work are compared with the finite element method (FEM) using the commercial software Comsol. We consider four different geometries of varying complexity with respect to inner boundary conditions.
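To make the basic idea concrete, here is a minimal Python sketch of the plain fixed-step random walk solver for the Laplace equation on a square grid; the speed improvement studied in the work instead lets the step length grow with the distance to the boundary. All names and parameters are illustrative, not taken from the paper.

```python
import random

def walk_estimate(x0, y0, n, boundary, walks=2000, seed=0):
    # Estimate u(x0, y0) for Laplace's equation on an n x n grid by
    # averaging the boundary values hit by simple random walks.
    # (Fixed unit steps; the paper's improvement uses variable steps.)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        x, y = x0, y0
        while 0 < x < n and 0 < y < n:
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x, y = x + dx, y + dy
        total += boundary(x, y)          # walk has exited the domain
    return total / walks

# Boundary data u = x/n is discretely harmonic, so the exact value
# at the centre of the square is 0.5; the estimate should be close.
n = 20
u = walk_estimate(n // 2, n // 2, n, lambda x, y: x / n)
```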
We consider a recently proposed gyroscopic device for converting mechanical ocean wave energy to electrical energy. Two models of the device from the standard engineering mechanics literature are analysed, and a further model is derived from analytical mechanics considerations. From these models, estimates of the power production, efficiency, forces and moments are made. We find that it is possible to extract a significant amount of energy from an ocean wave using the described device. Further studies are required for a full treatment of the device.
Adsorption isotherms are the most important parameters in rigorous models of chromatographic processes. In this paper, in order to recover adsorption isotherms, we consider a coupled complex boundary method (CCBM), which was previously proposed for solving an inverse source problem [2]. With CCBM, the original boundary fitting problem is transformed into a domain fitting problem. Thus, this method has advantages regarding robustness and computation in reconstruction. In contrast to the traditional CCBM, to reduce computational complexity and cost, the recovered adsorption isotherm corresponds only to the real part of the solution of a forward complex initial boundary value problem. Furthermore, we take into account the position of the profiles and apply the momentum criterion to improve the optimization process. Using Tikhonov regularization, the well-posedness, convergence properties and regularization parameter selection methods are studied. Based on an adjoint technique, we derive the exact Jacobian of the objective function and give an algorithm to reconstruct the adsorption isotherm. Finally, numerical simulations are given to show the feasibility and efficiency of the proposed regularization method.
We review phase-space simulation techniques for fermions, showing how a Gaussian operator basis leads to exact calculations of the evolution of a many-body quantum system in both real and imaginary time. We apply such techniques to the Hubbard model and to the problem of molecular dissociation of bosonic molecules into pairs of fermionic atoms.
This paper presents a piecewise constant level set method for the topology optimization of steady Navier-Stokes flow. Combining piecewise constant level set functions and an artificial friction force, the optimization problem is formulated and analyzed based on a design variable. The topology sensitivities are computed by the adjoint method based on Lagrangian multipliers. In the optimization procedure, the piecewise constant level set function is updated by a new descent method, without needing to solve the Hamilton-Jacobi equation. To achieve optimization, the piecewise constant level set method does not track the boundaries between the different materials but instead relies on a regional division of the design domain, which can easily create small holes without topological derivatives. Furthermore, we make some attempts to avoid updating the Lagrangian multipliers and to handle the constraints easily. The algorithm is very simple to implement, and it is possible to obtain the optimal solution in a few iterations. Several numerical examples for both two- and three-dimensional problems are provided to demonstrate the validity and efficiency of the proposed method.
Neural-symbolic and statistical relational artificial intelligence both integrate frameworks for learning with logical reasoning. This survey identifies several parallels across seven different dimensions between these two fields. These parallels can not only be used to characterize and position neural-symbolic artificial intelligence approaches but also to identify a number of directions for further research.
State-of-the-art probabilistic inference algorithms, such as variable elimination and search-based approaches, rely heavily on the order in which variables are marginalized. Finding the optimal ordering is an NP-complete problem. This computational hardness has led to heuristics for finding adequate variable orderings. However, these heuristics have mostly targeted discrete random variables. We show how variable ordering heuristics from the discrete domain can be ported to the discrete-continuous domain. We equip the state-of-the-art F-XSDD(BR) solver for discrete-continuous problems with such heuristics. Additionally, we propose a novel heuristic called bottom-up min-fill (BU-MiF), yielding a solver capable of determining good variable orderings without having to rely on the user to provide such an ordering. We empirically demonstrate its performance on a set of benchmark problems.
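For reference, the classic min-fill heuristic that such solvers adapt can be sketched in a few lines of Python. This is the textbook top-down heuristic, not the paper's bottom-up BU-MiF variant, and the function name is illustrative.

```python
def min_fill_order(adj):
    # Classic min-fill variable-ordering heuristic: repeatedly eliminate
    # the vertex whose elimination adds the fewest fill-in edges.
    # adj maps each vertex to the set of its neighbours (undirected).
    adj = {v: set(ns) for v, ns in adj.items()}   # defensive copy
    order = []
    while adj:
        def fill(v):
            # Number of missing edges among the neighbours of v.
            ns = list(adj[v])
            return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
                       if ns[j] not in adj[ns[i]])
        v = min(adj, key=lambda u: (fill(u), u))  # ties broken by name
        for a in adj[v]:                          # add fill-in edges
            for b in adj[v]:
                if a != b:
                    adj[a].add(b)
        for u in adj[v]:                          # remove v from the graph
            adj[u].discard(v)
        order.append(v)
        del adj[v]
    return order

# On a path graph a-b-c-d the heuristic eliminates the endpoints first.
order = min_fill_order({'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}})
```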
Investigating the properties, explaining, and predicting the behaviour of a physical system described by a system (matrix) pencil often require understanding how the canonical structure information of the system pencil may change, e.g., how eigenvalues coalesce or split apart, due to perturbations in the matrix pencil elements. Often these system pencils have different block-partitioning and/or symmetries. We study changes of the congruence canonical form of a complex skew-symmetric matrix pencil under small perturbations. The problem of computing the congruence canonical form is known to be ill-posed: both the canonical form and the reduction transformation depend discontinuously on the entries of a pencil. Thus it is important to know the canonical forms of all such pencils that are close to the investigated pencil. One way to investigate this problem is to construct the stratification of orbits and bundles of the pencils. To be precise, for any problem dimension we construct the closure hierarchy graph for congruence orbits or bundles. Each node (vertex) of the graph represents an orbit (or a bundle) and each edge represents the cover/closure relation. Such a relation means that there is a path from one node to another node if and only if a skew-symmetric matrix pencil corresponding to the first node can be transformed by an arbitrarily small perturbation to a skew-symmetric matrix pencil corresponding to the second node. From the graph it is straightforward to identify more degenerate and more generic nearby canonical structures. A necessary (but not sufficient) condition for one orbit being in the closure of another is that the first orbit has larger codimension than the second one. Therefore we compute the codimensions of the congruence orbits (or bundles). This is done via the solution of an associated homogeneous system of matrix equations.
The complete stratification is done by proving the relation between equivalence and congruence for the skew-symmetric matrix pencils. This relation allows us to use the known result about the stratifications of general matrix pencils (under strict equivalence) in order to stratify skew-symmetric matrix pencils under congruence. Matlab functions to work with skew-symmetric matrix pencils and a number of other types of symmetries for matrices and matrix pencils are developed and included in the Matrix Canonical Structure (MCS) Toolbox.
We study how elementary divisors and minimal indices of a skew-symmetric matrix polynomial of odd degree may change under small perturbations of the matrix coefficients. We investigate these changes qualitatively by constructing the stratifications (closure hierarchy graphs) of orbits and bundles for skew-symmetric linearizations. We also derive the necessary and sufficient conditions for the existence of a skew-symmetric matrix polynomial with prescribed degree, elementary divisors, and minimal indices.
Arnold [V.I. Arnold, On matrices depending on parameters, Russian Math. Surveys 26 (2) (1971) 29–43] constructed miniversal deformations of square complex matrices under similarity; that is, a simple normal form to which not only a given square matrix A but all matrices B close to it can be reduced by similarity transformations that smoothly depend on the entries of B. We construct miniversal deformations of matrices under congruence.
Matlab functions to work with the canonical structures for congruence and *congruence of matrices, and for congruence of symmetric and skew-symmetric matrix pencils, are presented. A user can provide the canonical structure objects or create (random) matrix example setups with desired canonical information, and compute the codimensions of the corresponding orbits: if the structural information (the canonical form) of a matrix or a matrix pencil is known, it is used for the codimension computations; otherwise they are computed numerically. Some auxiliary functions are provided too. All these functions extend the Matrix Canonical Structure Toolbox.
We study how small perturbations of general matrix polynomials may change their elementary divisors and minimal indices by constructing the closure hierarchy (stratification) graphs of matrix polynomials' orbits and bundles. To solve this problem, we construct the stratification graphs for the first companion Fiedler linearization of matrix polynomials. Recall that the first companion Fiedler linearization, like all Fiedler linearizations, is a matrix pencil with a particular block structure. Moreover, we show that the stratification graphs do not depend on the choice of Fiedler linearization, which means that all the spaces of matrix polynomial Fiedler linearizations have the same geometry (topology). This geometry coincides with the geometry of the space of matrix polynomials. The novel results are illustrated by examples using the software tool StratiGraph, extended with associated new functionality.
We study how small perturbations of a skew-symmetric matrix pencil may change its canonical form under congruence. This problem is also known as the stratification problem of skew-symmetric matrix pencil orbits and bundles. In other words, we investigate when the closure of the congruence orbit (or bundle) of a skew-symmetric matrix pencil contains the congruence orbit (or bundle) of another skew-symmetric matrix pencil. This theory relies on our main theorem stating that a skew-symmetric matrix pencil A-λB can be approximated by pencils strictly equivalent to a skew-symmetric matrix pencil C-λD if and only if A-λB can be approximated by pencils congruent to C-λD.
A widely used form of test matrix is the randsvd matrix constructed as the product A = U Sigma V*, where U and V are random orthogonal or unitary matrices from the Haar distribution and Sigma is a diagonal matrix of singular values. Such matrices are random but have a specified singular value distribution. The cost of forming an m x n randsvd matrix is m^3 + n^3 flops, which is prohibitively expensive at extreme scale; moreover, the randsvd construction requires a significant amount of communication, making it unsuitable for distributed memory environments. By dropping the requirement that U and V be Haar distributed and that both be random, we derive new algorithms for forming A that have cost linear in the number of matrix elements and require a low amount of communication and synchronization. We specialize these algorithms to generating matrices with a specified 2-norm condition number. Numerical experiments show that the algorithms have excellent efficiency and scalability.
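To fix notation, a minimal Python/NumPy sketch of the classic randsvd construction is given below. This is the expensive O(m^3 + n^3) baseline discussed above, not the paper's new linear-cost algorithms; the function names are illustrative.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar-distributed orthogonal matrix via QR of a Gaussian matrix,
    # with the sign fix that makes the distribution exactly Haar.
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

def randsvd(m, n, sigma, rng):
    # Classic randsvd: A = U diag(sigma) V^T with Haar U, V.
    # Forming U and V explicitly is what costs O(m^3 + n^3) flops.
    U = haar_orthogonal(m, rng)
    V = haar_orthogonal(n, rng)
    S = np.zeros((m, n))
    k = min(m, n)
    S[:k, :k] = np.diag(sigma)
    return U @ S @ V.T

rng = np.random.default_rng(0)
sigma = np.array([10.0, 1.0, 0.1])   # prescribed singular values
A = randsvd(4, 3, sigma, rng)
```

The singular values of the resulting A equal the prescribed sigma, while its singular vectors are random.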
We propose a two-parameter family of nonsymmetric dense n x n matrices A(alpha, beta) for which LU factorization without pivoting is numerically stable, and we show how to choose alpha and beta to achieve any value of the infinity-norm condition number. The matrix A(alpha, beta) can be formed from a simple formula in O(n^2) flops. The matrix is suitable for use in the HPL-AI Mixed-Precision Benchmark, which requires an extreme scale test matrix (dimension n > 10^7) that has a controlled condition number and can be safely used in LU factorization without pivoting. It is also of interest as a general-purpose test matrix.
We explore the floating-point arithmetic implemented in the NVIDIA tensor cores, which are hardware accelerators for mixed-precision matrix multiplication available on the Volta, Turing, and Ampere microarchitectures. Using Volta V100, Turing T4, and Ampere A100 graphics cards, we determine what precision is used for the intermediate results, whether subnormal numbers are supported, what rounding mode is used, in which order the operations underlying the matrix multiplication are performed, and whether partial sums are normalized. These aspects are not documented by NVIDIA, and we gain insight by running carefully designed numerical experiments on these hardware units. Knowing the answers to these questions is important if one wishes to: (1) accurately simulate NVIDIA tensor cores on conventional hardware; (2) understand the differences between results produced by code that utilizes tensor cores and code that uses only IEEE 754-compliant arithmetic operations; and (3) build custom hardware whose behavior matches that of NVIDIA tensor cores. As part of this work we provide a test suite that can be easily adapted to test newer versions of the NVIDIA tensor cores as well as similar accelerators from other vendors, as they become available. Moreover, we identify a non-monotonicity issue affecting floating point multi-operand adders if the intermediate results are not normalized after each step.
We develop an efficient algorithm for sampling the eigenvalues of random matrices distributed according to the Haar measure over the orthogonal or unitary group. Our technique directly samples a factorization of the Hessenberg form of such matrices, and then computes their eigenvalues with a tailored core-chasing algorithm. This approach requires a number of floating-point operations that is quadratic in the order of the matrix being sampled, and it can be adapted to other matrix groups. In particular, we explain how it can be used to sample the Haar measure over the special orthogonal and unitary groups, as well as the conditional probability distribution obtained by requiring the determinant of the sampled matrix to be a given number on the unit circle.
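As a point of comparison, the naive cubic-cost baseline that such a quadratic-cost sampler improves on can be sketched in Python/NumPy: form a Haar orthogonal matrix explicitly (QR of a Gaussian matrix with the usual sign fix) and compute its eigenvalues, which lie on the unit circle. The names are illustrative and this does not reproduce the paper's core-chasing algorithm.

```python
import numpy as np

def haar_eigvals_naive(n, rng):
    # Naive O(n^3) baseline: build a Haar orthogonal matrix explicitly
    # and take its eigenvalues. The paper's sampler works directly on a
    # factored Hessenberg form and needs only O(n^2) operations.
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    Q = Q * np.sign(np.diag(R))   # sign fix for the exact Haar distribution
    return np.linalg.eigvals(Q)

rng = np.random.default_rng(1)
lam = haar_eigvals_naive(8, rng)   # eigenvalues of an orthogonal matrix
```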
Reconstructing the homogenized coefficient, also called the G-limit, in elliptic equations involving heterogeneous media is a typical nonlinear ill-posed inverse problem. In this work, we develop a numerical technique to determine the G-limit that does not rely on any periodicity assumption. The approach separates the computation of the deviation of the G-limit from the weak limit of the sequence of coefficients from the computation of that weak limit itself. Moreover, to tackle the ill-posedness, we develop several strategies, based on the classical Tikhonov regularization scheme, to regularize the introduced method. Various numerical tests for both standard and non-standard homogenization problems are given to show the efficiency and feasibility of the proposed method.
In this paper, we consider optimal portfolio selection when the covariance matrix of the asset returns is rank-deficient. In this case, the original Markowitz problem does not have a unique solution. The possible solutions belong to one of two subspaces, namely the range or the null space of the covariance matrix. The former case has been treated elsewhere, but not the latter. Assuming the solution lies in the null space, we derive an analytical unique solution that is risk-free and has minimum norm. Furthermore, we analyse the iterative method called the discrete functional particle method in the rank-deficient case. It is shown that the method converges to a risk-free solution, and we derive the initial condition that gives the smallest possible weights in norm. Finally, simulation results on artificial problems as well as real-world applications verify that the method is both efficient and stable.
We present new approaches for solving constrained multicomponent nonlinear Schrödinger equations in arbitrary dimensions. The idea is to introduce an artificial time and solve an extended damped second order dynamic system whose stationary solution is the solution to the time-independent nonlinear Schrödinger equation. Constraints are often handled by projection onto the constraint set; here we include them explicitly in the dynamical system. We show the applicability and efficiency of the methods on examples of relevance in modern physics applications.
We present an approach for solving optimization problems with or without constraints, which we call the dynamical functional particle method (DFPM). The method consists of formulating the optimization problem as a second order damped dynamical system and then applying a symplectic method to solve it numerically. In the first part of the chapter, we give an overview of the method and provide the necessary mathematical background. We show that DFPM is a stable, efficient and, given the optimal choice of parameters, competitive method. Optimal parameters are derived for linear systems of equations, linear least squares, and linear eigenvalue problems. A framework for solving nonlinear problems is developed and numerically tested. In the second part, we adapt the method to several important applications such as image analysis, inverse problems for partial differential equations, and quantum physics. At the end, we present open problems and share some ideas for future work on generalized (nonlinear) eigenvalue problems, handling constraints with reflection, global optimization, and nonlinear ill-posed problems.
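A toy Python sketch of the core idea may help: solve a symmetric positive definite linear system A x = b by integrating the damped system x'' + eta x' = b - A x with symplectic Euler. The parameters here are ad hoc and illustrative; the chapter derives the optimal choices.

```python
import numpy as np

def dfpm_solve(A, b, eta=2.0, dt=0.1, steps=2000):
    # Sketch of the dynamical functional particle method (DFPM):
    # integrate x'' + eta * x' = b - A x with symplectic Euler.
    # The damped dynamics converge to the stationary point A x = b.
    # (eta and dt are chosen ad hoc here, not the optimal values.)
    x = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(steps):
        v += dt * (b - A @ x - eta * v)   # update velocity from the force
        x += dt * v                       # symplectic Euler: use new v
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # symmetric positive definite
b = np.array([1.0, 2.0])
x = dfpm_solve(A, b)                      # converges to the solution of A x = b
```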
This book is aimed at upper-secondary school students who want to deepen their knowledge of RSA cryptography. RSA cryptography is an advanced method for communicating with secret messages and is widely used in, for example, the banking world. When you pay with your card or use your electronic identification, RSA cryptography is used so that everything you do is protected and secure. For large transactions between banks, RSA cryptography is also used so that both the payer and the payee can be sure that everything is done correctly. The book is divided into four chapters. Chapters 3 and 4 are considerably more advanced than Chapters 1 and 2. Chapter 1 consists mostly of examples and exercises covering the mathematics required to carry out RSA cryptography with small numbers. Chapter 2 uses the mathematics from Chapter 1 to methodically teach, through examples and exercises, how RSA cryptography with small numbers works. Chapter 3 presents the mathematics underlying why RSA cryptography works, using examples, theorems, clarified proofs, and a few exercises. Chapter 4 explains why RSA cryptography is secure and easy to use. Primality tests are the most important topic of this final chapter.
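To give a flavour of the "RSA with small numbers" material covered in Chapters 1 and 2, here is a short Python sketch using a standard textbook parameter choice (p = 61, q = 53, e = 17), not values taken from the book itself.

```python
# Toy RSA with small numbers (textbook parameters, for illustration only).
p, q = 61, 53
n = p * q                  # public modulus: 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n: 3120
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

m = 65                     # the message, encoded as a number smaller than n
c = pow(m, e, n)           # encryption: c = m^e mod n
m2 = pow(c, d, n)          # decryption: m = c^d mod n, recovers 65
```

The security rests on the difficulty of factoring n back into p and q, which is why the book's final chapter focuses on prime numbers and primality tests.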
In this paper, uncertainty is represented by Z-numbers serving as the coefficients and variables of a fuzzy equation. This modification of the fuzzy equation is suitable for modeling nonlinear systems with uncertain parameters. Here, we use fuzzy equations as models for uncertain nonlinear systems; modeling such a system amounts to finding the coefficients of the fuzzy equation. However, it is very difficult to obtain Z-number coefficients of fuzzy equations.
To handle this modeling task for uncertain nonlinear systems, we employ a neural network technique to determine the appropriate coefficients of the fuzzy equations; that is, we use the neural network method to approximate the Z-number coefficients of the fuzzy equations.
This paper surveys methodologies for the modeling and control of uncertain nonlinear systems. The focus is on the various techniques for solving the fuzzy equations that arise in fuzzy controllability, where the solutions generated by these equations serve as the controllers. Numerical techniques have emerged as superior for solving these types of problems, and neural network techniques are employed to determine the appropriate coefficients and solutions of the fuzzy systems.
Uncertain nonlinear systems can be modeled with fuzzy differential equations (FDEs) and the solutions of these equations are applied to analyze many engineering problems. However, it is very difficult to obtain solutions of FDEs. In this book chapter, the solutions of FDEs are approximated by utilizing the fuzzy Sumudu transform (FST) method. Here, the uncertainties are in the sense of fuzzy numbers and Z-numbers. Important theorems are laid down to illustrate the properties of FST. This new technique is compared with Average Euler method and Max-Min Euler method. The theoretical analysis and simulation results show that the FST method is effective in estimating the solutions of FDEs.
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors. We show how residue numbers can be represented as high-dimensional vectors in a manner that allows algebraic operations to be performed with component-wise, parallelizable operations on the vector elements. The resulting framework, when combined with an efficient method for factorizing high-dimensional vectors, can represent and operate on numerical values over a large dynamic range using vastly fewer resources than previous methods, and it exhibits impressive robustness to noise. We demonstrate the potential for this framework to solve computationally difficult problems in visual perception and combinatorial optimization, showing improvement over baseline methods. More broadly, the framework provides a possible account for the computational operations of grid cells in the brain, and it suggests new machine learning architectures for representing and manipulating numerical data.
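The classical residue-number-system ingredient (the part that the framework lifts into an algebra over high-dimensional vectors; the vector encoding itself is not reproduced here) can be sketched in plain Python with illustrative names:

```python
from math import prod

# Pairwise coprime moduli give a unique representation of 0..104.
MODULI = (3, 5, 7)

def encode(x):
    # Represent an integer by its residues modulo each modulus.
    return tuple(x % m for m in MODULI)

def add(a, b):
    # Component-wise, parallelizable addition (no carries between components).
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def mul(a, b):
    # Component-wise, parallelizable multiplication.
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def decode(r):
    # Chinese Remainder Theorem reconstruction of the integer.
    M = prod(MODULI)
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)
    return x % M

s = decode(add(encode(17), encode(25)))   # 17 + 25, computed component-wise
p = decode(mul(encode(6), encode(9)))     # 6 * 9, computed component-wise
```

The independence of the components is what makes the operations parallelizable and robust, which is the property the high-dimensional vector encoding preserves.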
Competitive adsorption isotherms must be estimated in order to simulate and optimize modern continuous modes of chromatography in situations where experimental trial-and-error approaches are too complex and expensive. The inverse method is a numeric approach for the fast estimation of adsorption isotherms directly from overloaded elution profiles. However, this identification process is usually ill-posed. Moreover, traditional model-based inverse methods are restricted by the need to choose an appropriate adsorption isotherm model prior to estimation, which might be very hard for complicated adsorption behavior. In this study, we develop a Kohn–Vogelius formulation for the model-free adsorption isotherm estimation problem. The solvability and convergence for the proposed inverse method are studied. In particular, using a problem-adapted adjoint, we obtain a convergence rate under substantially weaker and more realistic conditions than are required by the general theory. Based on the adjoint technique, a numerical algorithm for solving the proposed optimization problem is developed. Numerical tests for both synthetic and real-world problems are given to show the efficiency of the proposed regularization method.
In this work, based on the collage theorem, we develop a new numerical approach to reconstruct the locations of discontinuity of the conduction coefficient in elliptic partial differential equations (PDEs) with inaccurate measurement data and coefficient value. For a given conductivity coefficient, one can construct a contraction mapping such that its fixed point is just the gradient of a solution to the elliptic system. Therefore, the problem of reconstructing a conductivity coefficient in PDEs can be considered as an approximation of the observation data by the fixed point of a contraction mapping. By collage theorem, we translate it to seek a contraction mapping that keeps the observation data as close as possible to itself, which avoids solving adjoint problems when applying the gradient descent method to the corresponding optimization problem. Moreover, the total variation regularizing strategy is applied to tackle the ill-posedness and the parametric level set technique is adopted to represent the discontinuity of the conductivity coefficient. Various numerical simulations are given to show the efficiency of the proposed method.
The dynamical functional particle method (DFPM) is a method for solving equations, e.g. PDEs, using a second order damped dynamical system. We show how the method can be extended to include constraints, both explicitly as global constraints and by adding the constraints as additional damped dynamical equations. These methods are implemented in Comsol, and we show numerical tests for finding the stationary solution of a nonlinear heat equation with and without constraints (global and dynamical). The results show that DFPM is a very general and robust way of solving PDEs, and it should be of interest to implement the approach more generally in Comsol.
For certain materials science scenarios arising in rubber technology, one-dimensional moving boundary problems with kinetic boundary conditions are capable of unveiling the large-time behavior of the diffusants penetration front, giving a direct estimate on the service life of the material. Driven by our interest in estimating how a finite number of diffusant molecules penetrate through a dense rubber, we propose a random walk algorithm to approximate numerically both the concentration profile and the location of the sharp penetration front. The proposed scheme decouples the target evolution system in two steps: (i) the ordinary differential equation corresponding to the evaluation of the speed of the moving boundary is solved via an explicit Euler method, and (ii) the associated diffusion problem is solved by a random walk method. To verify the correctness of our random walk algorithm we compare the resulting approximations to computational results based on a suitable finite element approach with a controlled convergence rate. Our numerical results recover well penetration depth measurements of a controlled experiment designed specifically for this setting.
Applications like geometric reverse engineering, robot vision and automatic inspection require sets of points to be measured from the surfaces of objects and then processed by segmentation and fitting algorithms to establish shape parameters of interest. In industrial applications where speed, reliability and automatic operation are important, a measuring system based on a laser profile scanner mounted on an industrial robot is attractive. In earlier publications we presented such a system, together with a segmentation algorithm for planar surfaces that uses 2D profile data in combination with robot poses. Due to the data reduction offered by this approach, the segmentation algorithm computes faster than algorithms based on 3D point sets alone. Encouraged by these results, we have now developed a segmentation algorithm for two different quadric surfaces, also based on 2D profiles in combination with robot poses. This paper presents the new algorithm together with test results, as well as an interesting observation that points to future work.
The Gaussian phase-space representation can be used to implement quantum dynamics for fermionic particles numerically. To improve numerical results, we explore the use of dynamical diffusion gauges in such implementations. This is achieved by benchmarking quantum dynamics of few-body systems against independent exact solutions. A diffusion gauge is implemented here as a so-called noise-matrix, which satisfies a matrix equation defined by the corresponding Fokker-Planck equation of the phase-space representation. For the physical systems with fermionic particles considered here, the numerical evaluation of the new diffusion gauges allows us to double the practical simulation time, compared with hitherto known analytic noise-matrices. This development may have far reaching consequences for future quantum dynamical simulations of many-body systems.
We study the rotational properties of a two-component Bose-Einstein condensed gas of distinguishable atoms confined in a ring potential, using both the mean-field approximation and diagonalization of the many-body Hamiltonian. We demonstrate that angular momentum may be given to the system either via single-particle or "collective" excitation. Furthermore, despite the complexity of this problem, under rather typical conditions the dispersion relation takes a remarkably simple and regular form. Finally, we argue that under certain conditions the dispersion relation is determined via collective excitation. The corresponding many-body state, which minimizes the kinetic energy in addition to the interaction energy, is dictated by elementary number theory.
Motivated by numerous experiments on Bose-Einstein condensed atoms which have been performed in tight trapping potentials of various geometries (elongated and/or toroidal/annular), we develop a general method which allows us to reduce the corresponding three-dimensional Gross-Pitaevskii equation for the order parameter into an effectively one-dimensional equation, taking into account the interactions (i.e., treating the width of the transverse profile variationally) and the curvature of the trapping potential. As an application of our model we consider atoms which rotate in a toroidal trapping potential. We evaluate the state of lowest energy for a fixed value of the angular momentum within various approximations of the effectively one-dimensional model and compare our results with the full solution of the three-dimensional problem, thus getting evidence for the accuracy of our model.
A popular way to control for confounding in observational studies is to identify clusters of individuals (e.g., twin pairs), such that a large set of potential confounders are constant (shared) within each cluster. By studying the exposure-outcome association within clusters, we are in effect controlling for the whole set of shared confounders. An increasingly popular analysis tool is the between-within (BW) model, which decomposes the exposure-outcome association into a 'within-cluster effect' and a 'between-cluster effect'. BW models are relatively common for nonsurvival outcomes and have been studied in the theoretical literature. Although it is straightforward to use BW models for survival outcomes, this has rarely been carried out in practice, and such models have not been studied in the theoretical literature. In this paper, we propose a gamma BW model for survival outcomes. We compare the properties of this model with the more standard stratified Cox regression model and use the proposed model to analyze data from a twin study of obesity and mortality. We find the following: (i) the gamma BW model often produces a more powerful test of the 'within-cluster effect' than stratified Cox regression; and (ii) the gamma BW model is robust against model misspecification, although there are situations where it could give biased estimates.
We investigate the dynamics of magnetic vortices in type II superconductors with normal state pinning sites using the Ginzburg–Landau equations. Simulation results demonstrate hopping of vortices between pinning sites, influenced by external magnetic fields and external currents. The system is highly nonlinear and the vortices show complex nonlinear dynamical behaviour.
We present here the theoretical results and numerical analysis of a regularization method for the inverse problem of determining the rate constant distribution from biosensor data. The rate constant distribution method is a modern technique to study binding equilibrium and kinetics for chemical reactions. Finding a rate constant distribution from biosensor data can be described as a multidimensional Fredholm integral equation of the first kind, which is a typical ill-posed problem in the sense of J. Hadamard. By combining regularization theory and the goal-oriented adaptive discretization technique, we develop an Adaptive Interaction Distribution Algorithm (AIDA) for the reconstruction of rate constant distributions. The mesh refinement criteria are proposed based on the a posteriori error estimation of the finite element approximation. The stability of the obtained approximate solution with respect to data noise is proven. Finally, numerical tests for both synthetic and real data are given to show the robustness of the AIDA.
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
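The core idea of a damped second-order in time gradient system with a symplectic discretization can be sketched on a toy quadratic objective. The matrix `A`, vector `b`, damping `eta` and step `dt` below are illustrative choices, not the paper's actual problem or scheme:

```python
import numpy as np

def damped_symplectic_solve(A, b, eta=1.0, dt=0.1, steps=2000):
    """Drive  x'' + eta*x' = -(A x - b)  to its rest state A x = b with a
    semi-implicit (symplectic) Euler scheme: kick the velocity, then drift."""
    x = np.zeros_like(b, dtype=float)
    v = np.zeros_like(b, dtype=float)
    for _ in range(steps):
        v += dt * (-(A @ x - b) - eta * v)   # kick: gradient force plus damping
        x += dt * v                          # drift: position update
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])       # toy SPD system (illustrative)
b = np.array([1.0, 2.0])
x = damped_symplectic_solve(A, b)            # converges to the solution of A x = b
```

The damping dissipates the kinetic energy, so the trajectory settles at the minimizer of the quadratic energy; in the regularization setting the stopping time, rather than the rest state, plays the regularizing role.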
In this paper, we consider an inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary conditions. The unknown source term is to be determined by additional boundary data. This problem is ill-posed since the dimensionality of the boundary is lower than the dimensionality of the inner domain. To overcome the ill-posed nature, using the a priori information (sourcewise representation), and based on the coupled complex boundary method, we propose a coupled complex boundary expanding compacts method (CCBECM). A finite element method is used for the discretization of CCBECM. The regularization properties of CCBECM for both the continuous and discrete versions are proved. Moreover, an a posteriori error estimate of the obtained finite element approximate solution is given and calculated by a projected gradient algorithm. Finally, numerical results show that the proposed method is stable and effective.
In this paper, we present an algorithm to be used by an inspection robot to produce a gas distribution map and localize gas sources in a large complex environment. The robot, equipped with a remote gas sensor, measures the total absorption of a tuned laser beam and returns integral gas concentrations. A mathematical formulation of such a measurement facility is a sequence of Radon transforms, which is a typical ill-posed problem. To tackle the ill-posedness, we develop a new regularization method based on the sparse representation property of gas sources and the adaptive finite-element method. In practice, only a discrete model can be applied, and the quality of the gas distribution map depends on a detailed 3-D world model that allows us to accurately localize the robot and estimate the paths of the laser beam. In this work, using the positivity of measurements and the process of concentration, we estimate the lower and upper bounds of measurements and the exact continuous model (mapping from gas distribution to measurements), and then create a more accurate discrete model of the continuous tomography problem. Based on adaptive sparse regularization, we introduce a new algorithm that gives us not only a solution map but also a mesh map. The solution map more accurately locates gas sources, and the mesh map provides the real gas distribution map. Moreover, the error estimation of the proposed model is discussed. Numerical tests for both the synthetic problem and the practical problem are given to show the efficiency and feasibility of the proposed algorithm.
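The adaptive sparse regularization of the paper is not reproduced here, but the basic sparse-reconstruction step it builds on can be sketched with iterative soft-thresholding (ISTA) on a toy linear measurement model. The matrix `A`, the two-source ground truth and the parameter `alpha` are illustrative assumptions:

```python
import numpy as np

def ista(A, y, alpha=0.1, n_iter=1000):
    """Sparse reconstruction by iterative soft-thresholding (ISTA):
    minimize 0.5*||A x - y||^2 + alpha*||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2            # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - t * (A.T @ (A @ x - y))            # gradient step on the data-fit term
        x = np.sign(g) * np.maximum(np.abs(g) - t * alpha, 0.0)   # soft-threshold
    return x

# hypothetical measurement model: each row of A integrates the field along one "beam"
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[10, 55]] = [1.0, 0.5]                      # two localized gas sources
x_rec = ista(A, A @ x_true)
```

The l1 penalty drives most coefficients exactly to zero, so the reconstruction concentrates on a few source locations even though the system is underdetermined.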
In this paper, we establish an initial theory regarding the second-order asymptotical regularization (SOAR) method for the stable approximate solution of ill-posed linear operator equations in Hilbert spaces, which are models for linear inverse problems with applications in the natural sciences, imaging and engineering. We show the regularizing properties of the new method, as well as the corresponding convergence rates. We prove that, under the appropriate source conditions and by using Morozov's conventional discrepancy principle, SOAR exhibits the same power-type convergence rate as the classical version of asymptotical regularization (Showalter's method). Moreover, we propose a new total energy discrepancy principle for choosing the terminating time of the dynamical solution from SOAR, which corresponds to the unique root of a monotonically non-increasing function and allows us to also show an order optimal convergence rate for SOAR. A damped symplectic iterative regularizing algorithm is developed for the realization of SOAR. Several numerical examples are given to show the accuracy and the acceleration effect of the proposed method. A comparison with other state-of-the-art methods is provided as well.
This article is devoted to the application of a Lagrange principle to an inverse problem for a two-dimensional integral equation of the first kind with a positive kernel. To tackle the ill-posedness of this problem, a new numerical method is developed. The optimal and regularization properties of this method are proved. Moreover, a pseudo-optimal error of the proposed method is considered. The efficiency and applicability of this method are demonstrated in a numerical example of an image deblurring problem with noisy data.
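The setting, a first-kind integral equation with a positive kernel applied to deblurring, can be illustrated on a minimal 1-D toy problem regularized with plain Tikhonov (not the Lagrange-principle method of the paper); the kernel width, noise level and `alpha` below are illustrative assumptions:

```python
import numpy as np

n = 100
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
sig = 0.03                                        # blur width (illustrative)
# discretized positive Gaussian kernel of the first-kind integral operator
K = h * np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sig ** 2)) \
    / (sig * np.sqrt(2.0 * np.pi))

x_true = np.where((t > 0.3) & (t < 0.7), 1.0, 0.0)    # box-shaped "image"
rng = np.random.default_rng(0)
y = K @ x_true + 1e-3 * rng.standard_normal(n)        # blurred, noisy data

alpha = 1e-4                                          # regularization parameter
x_rec = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
```

Solving the normal equations without the `alpha` term amplifies the noise catastrophically, which is the ill-posedness the abstract refers to; the regularized solve recovers the box up to smoothed edges.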
This work deals with the one-dimensional Stefan problem with a general time-dependent boundary condition at the fixed boundary. Stochastic solutions are obtained using discrete random walks, and the results are compared with analytic formulae when they exist, otherwise with numerical solutions from a finite difference method. The innovative part is to model the moving boundary with a random walk method. The results show statistical convergence for many random walkers as Δx→0. Stochastic methods are very competitive in large domains in higher dimensions and have the advantages of generality and ease of implementation. The drawback of the stochastic method is that longer execution times are required for increased accuracy. Since the code is easily adapted for parallel computing, it is possible to speed up the calculations. Regarding applications, Stefan problems have historically been used to model the dynamics of melting ice, and we give such an example here where the fixed boundary condition follows data from observed day temperatures at Örebro airport. Nowadays, there is a large range of applications, such as climate models, the diffusion of lithium ions in lithium-ion batteries and the modelling of steam chambers for petroleum extraction.
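The random walk idea can be sketched on the plain heat equation with a fixed boundary (no moving Stefan front); the step length `dx`, walker count and evaluation point below are illustrative assumptions:

```python
import numpy as np

def heat_rw(x0, t, dx=0.05, n_walk=20000, seed=1):
    """Monte Carlo estimate of u(x0, t) for u_t = u_xx on x > 0 with
    u(0, t) = 1 and u(x, 0) = 0: run discrete random walks from (x0, t)
    backwards in time; a walker reaching the boundary picks up the value 1."""
    dt = dx ** 2 / 2.0                        # parabolic scaling for unit diffusivity
    n_steps = int(round(t / dt))
    rng = np.random.default_rng(seed)
    k = np.full(n_walk, int(round(x0 / dx)))  # integer lattice positions
    hit = np.zeros(n_walk, dtype=bool)
    for _ in range(n_steps):
        k += rng.choice([-1, 1], size=n_walk)
        hit |= (k <= 0)                       # absorbed at the fixed boundary
    return hit.mean()

u_mc = heat_rw(0.5, 0.25)   # analytic solution: erfc(0.5 / (2*sqrt(0.25))) ≈ 0.48
```

Each walker is independent, which is what makes the method trivially parallel; the statistical error decays like the inverse square root of the number of walkers, which is the accuracy-versus-execution-time trade-off mentioned above.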
We consider the exponential matrix representing the dynamics of the Fermi-Bose model in an undepleted bosonic field approximation. A recent application of this model is molecular dimers dissociating into their atomic constituents. The problem is solved in D spatial dimensions by dividing the system matrix into blocks with generalizations of Hankel matrices, here referred to as D-block-Hankel matrices. The method is practically useful for treating large systems, i.e. dense computational grids or higher spatial dimensions, either on a single standard computer or a cluster. In particular, the results can be used for studies of three-dimensional physical systems of arbitrary geometry. We illustrate the generality of our approach by giving numerical results for the dynamics of Glauber type atomic pair correlation functions for a non-isotropic three-dimensional harmonically trapped molecular Bose-Einstein condensate.
This article explains and illustrates the use of a set of coupled dynamical equations, second order in a fictitious time, which converge to solutions of stationary Schrödinger equations with additional constraints. In fact, the method is general and can solve constrained minimization problems in many fields. We present the method for introductory applications in quantum mechanics, including three qualitatively different numerical examples: the radial Schrödinger equation for the hydrogen atom; the two-dimensional harmonic oscillator with degenerate excited states; and a non-linear Schrödinger equation for rotating states. The presented method is intuitive, with analogies in classical mechanics for damped oscillators, and easy to implement, either in one's own code or with software for dynamical systems. Hence, we find it suitable for a continuation course in quantum mechanics, or generally for applied mathematics courses with computational content. The undergraduate student can, for example, use our derived results and the code (supplemental material) to study the Schrödinger equation in 1D for any potential. The graduate student and the general physicist can work from our three examples to derive their own results for other models, including other global constraints.
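A minimal sketch of the damped fictitious-time dynamics on a small matrix eigenvalue problem, in the spirit of the damped-oscillator analogy; the toy Hamiltonian, damping `eta`, and step `dt` are illustrative assumptions, not the article's examples:

```python
import numpy as np

def ground_state_damped(H, eta=2.0, dt=0.05, steps=4000, seed=0):
    """Evolve  psi'' + eta*psi' = -(H - lam)*psi  in fictitious time, with
    lam = <psi, H psi> acting as the Lagrange multiplier of the constraint
    |psi| = 1; the damped dynamics settles on the lowest eigenpair of H."""
    rng = np.random.default_rng(seed)
    psi = rng.standard_normal(H.shape[0])
    psi /= np.linalg.norm(psi)
    v = np.zeros_like(psi)
    for _ in range(steps):
        lam = psi @ (H @ psi)                 # Rayleigh quotient
        v += dt * (-(H @ psi - lam * psi) - eta * v)
        psi += dt * v
        psi /= np.linalg.norm(psi)            # project back onto the constraint
    return psi @ (H @ psi), psi

# toy symmetric "Hamiltonian" (illustrative); its lowest eigenvalue is 2 - sqrt(2)
H = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
lam, psi = ground_state_damped(H)
```

Excited eigenvectors are unstable rest states of this flow, so a generic random start relaxes to the ground state; adding orthogonality constraints against previously found states would give excited states in the same way.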
The interpretation of nuclear magnetic resonance (NMR) data is of interest in a number of fields. In Ögren [Eur. Phys. J. B (2014) 87: 255] local boundary conditions for random walk simulations of NMR relaxation in digital domains were presented. Here, we have applied those boundary conditions to large, three-dimensional (3D) porous media samples. We compared the random walk results with known solutions and then applied them to highly structured 3D domains, from images derived using synchrotron radiation CT scanning of North Sea chalk samples. As expected, there were systematic errors caused by digitization of the pore surfaces, so we quantified those errors, and by using linear local boundary conditions, we were able to significantly improve the output. We also present a technique for treating numerical data prior to input into the ESPRIT algorithm for retrieving Laplace components of time series from NMR data (commonly called T-inversion).