猿问

Super-Gaussian fit

I have to study laser beam profiles. For that I need to fit a super-Gaussian curve to my data. The super-Gaussian equation is:

I * exp(-2 * ((x - x0) / sigma)^P)

where the exponent P accounts for the flat-top character of the laser beam profile.

I started with a simple Gaussian fit of my curve in Python, using the function curve_fit. The fit returns a Gaussian curve with optimized values of I, x0 and sigma. The Gaussian equation is:

I * exp(-(x - x0)^2 / (2 * sigma^2))
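A minimal sketch of such a Gaussian fit with scipy.optimize.curve_fit (the data here are synthetic and the starting values in p0 are illustrative assumptions, not part of the question):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, I, x0, sigma):
    # I * exp(-(x - x0)^2 / (2 * sigma^2)), the equation above
    return I * np.exp(-(x - x0) ** 2 / (2 * sigma ** 2))

# synthetic stand-in for a measured beam profile
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 201)
y = gaussian(x, I=2.0, x0=0.5, sigma=1.2) + rng.normal(0, 0.01, x.size)

# p0 supplies rough starting values for I, x0, sigma
popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
I_fit, x0_fit, sigma_fit = popt
```

The diagonal of `pcov` gives the variance of each fitted parameter, which is useful for quoting uncertainties on the beam width.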

Now I want to go a step further. I want to do a super-Gaussian curve fit, because I need to account for the flat-top character of the beam. So I need a fit that also optimizes the parameter P.

Does anyone know how to do a super-Gaussian curve fit in Python?

I know there is a way to do a super-Gaussian fit with Wolfram Mathematica, but it is not open source and I don't have it. So I would also like to know of any open-source software that can do a super-Gaussian curve fit, or that can do what Wolfram Mathematica does.


阿晨1998
457 views · 4 answers

蝴蝶不菲

Well, you need to write a function that computes the parametrized super-Gaussian and use it to model your data, for example with scipy.optimize.curve_fit. As the lead author of LMFIT (https://lmfit.github.io/lmfit-py/), which provides a high-level interface to fitting and curve fitting, I would suggest trying that library. With that approach, the super-Gaussian and the model function used to fit data might look like this:

```python
import numpy as np
from lmfit import Model

def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    """super-Gaussian distribution
    super_gaussian(x, amplitude, center, sigma, expon) =
        (amplitude/(sqrt(2*pi)*sigma)) * exp(-abs(x-center)**expon / (2*sigma**expon))
    """
    sigma = max(1.e-15, sigma)
    return ((amplitude/(np.sqrt(2*np.pi)*sigma))
            * np.exp(-abs(x-center)**expon / 2*sigma**expon))

# generate some test data
x = np.linspace(0, 10, 101)
y = super_gaussian(x, amplitude=7.1, center=4.5, sigma=2.5, expon=1.5)
y += np.random.normal(size=len(x), scale=0.015)

# make Model from the super_gaussian function
model = Model(super_gaussian)

# build a set of Parameters to be adjusted in fit, named from the arguments
# of the model function (super_gaussian), and providing initial values
params = model.make_params(amplitude=1, center=5, sigma=2., expon=2)

# you can place min/max bounds on parameters
params['amplitude'].min = 0
params['sigma'].min = 0
params['expon'].min = 0
params['expon'].max = 100

# note: if you wanted to make this strictly Gaussian, you could set
# expon=2 and prevent it from varying in the fit:
#   params['expon'].value = 2.0
#   params['expon'].vary = False

# now do the fit
result = model.fit(y, params, x=x)

# print out the fit statistics, best-fit parameter values and uncertainties
print(result.fit_report())

# plot results
import matplotlib.pyplot as plt
plt.plot(x, y, label='data')
plt.plot(x, result.best_fit, label='fit')
plt.legend()
plt.show()
```

This will print out a report like

```
[[Model]]
    Model(super_gaussian)
[[Fit Statistics]]
    # fitting method   = leastsq
    # function evals   = 53
    # data points      = 101
    # variables        = 4
    chi-square         = 0.02110713
    reduced chi-square = 2.1760e-04
    Akaike info crit   = -847.799755
    Bayesian info crit = -837.339273
[[Variables]]
    amplitude:  6.96892162 +/- 0.09939812 (1.43%) (init = 1)
    center:     4.50181661 +/- 0.00217719 (0.05%) (init = 5)
    sigma:      2.48339218 +/- 0.02134446 (0.86%) (init = 2)
    expon:      3.25148164 +/- 0.08379431 (2.58%) (init = 2)
[[Correlations]] (unreported correlations are < 0.100)
    C(amplitude, sigma) =  0.939
    C(sigma, expon)     = -0.774
    C(amplitude, expon) = -0.745
```

and produce a plot of the data together with the best fit. [plot not reproduced]

猛跑小猪

M Newville's answer works very well for me. But be careful! In the `super_gaussian` function definition, the parentheses in the quotient of the exponent are ambiguous:

```python
def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    ...
    return ((amplitude/(np.sqrt(2*np.pi)*sigma))
            * np.exp(-abs(x-center)**expon / 2*sigma**expon))
```

should be replaced with

```python
def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    ...
    return ((amplitude/(np.sqrt(2*np.pi)*sigma))
            * np.exp(-abs(x-center)**expon / (2*sigma**expon)))
```

Then the FWHM of the super-Gaussian, which reads

```python
FWHM = 2.*sigma*(2.*np.log(2.))**(1/expon)
```

agrees very well with the plot when computed carefully.

I am sorry to write this as an answer, but my reputation score is too low to add a comment to M Newville's post...
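The FWHM expression can be checked numerically: with the corrected parenthesization, the profile at x = center ± FWHM/2 should drop to exactly half its peak value. A small sketch, assuming the lmfit-style parameter names used in the answers above:

```python
import numpy as np

def super_gaussian(x, amplitude=1.0, center=0.0, sigma=1.0, expon=2.0):
    # corrected version: the exponent denominator is (2 * sigma**expon)
    return ((amplitude / (np.sqrt(2 * np.pi) * sigma))
            * np.exp(-abs(x - center) ** expon / (2 * sigma ** expon)))

sigma, expon, center = 2.5, 3.0, 4.5
fwhm = 2.0 * sigma * (2.0 * np.log(2.0)) ** (1.0 / expon)

peak = super_gaussian(center, center=center, sigma=sigma, expon=expon)
half = super_gaussian(center + fwhm / 2, center=center, sigma=sigma, expon=expon)
ratio = half / peak  # should equal 0.5
```

Algebraically, substituting x = center + FWHM/2 gives exp(-(2 ln 2)/2) = exp(-ln 2) = 1/2, independent of sigma and expon.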

largeQ

Fit y(x) = a * exp(-b * (x - c)**p) to the data for the parameters a, b, c, p. The worked numerical example referred to below showed a non-iterative method that requires no initial guess of the parameters. The general principle it applies is explained in the paper https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales . In the current version of that paper the case of the super-Gaussian is not explicitly treated, but it is not necessary to read the paper, since the screen copies showed the whole calculation in detail. [screen copies of the calculation not reproduced]

Note that the numerical results a, b, c, p can be used as initial values for a classical iterative regression.

Note: the linearized equation considered involves parameters A, B, C, D, which are obtained by linear regression. [equation not reproduced] The numerical values S(k) of the integrals are computed directly by numerical integration of the given data (as shown in the example above).
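As the answer notes, the non-iterative results can be handed to a classical iterative fit as starting values. A sketch of that hand-off, where a0, b0, c0, p0 are placeholders standing in for the output of the integral-equation method (the synthetic data and the use of abs() to keep non-integer powers real are my assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c, p):
    # y(x) = a * exp(-b * (x - c)**p); abs() keeps non-integer
    # powers of (x - c) real for x < c
    return a * np.exp(-b * np.abs(x - c) ** p)

# synthetic data; in practice x and y are the measurements
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 200)
y = model(x, 3.0, 0.2, 5.0, 2.5) + rng.normal(0, 0.01, x.size)

# a0, b0, c0, p0: rough estimates, e.g. from the integral method
a0, b0, c0, p0 = 2.5, 0.3, 4.8, 2.0
popt, _ = curve_fit(model, x, y, p0=[a0, b0, c0, p0])
rms = np.sqrt(np.mean((model(x, *popt) - y) ** 2))
```

Seeding the iterative fit this way avoids the convergence failures that curve_fit can hit when started from its default guesses.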

慕婉清6462132

This is the function for a super-Gaussian:

```python
import numpy as np

def super_gaussian(x, amp, x0, sigma):
    rank = 2
    return amp * ((np.exp(-(2 ** (2 * rank - 1)) * np.log(2)
                          * (((x - x0) ** 2) / sigma ** 2) ** rank)) ** 2)
```

Then you need to call it with scipy's curve_fit, like this:

```python
from scipy import optimize

opt, _ = optimize.curve_fit(super_gaussian, x, y)
vals = super_gaussian(x, *opt)
```

`vals` is what you need to plot; that is the fitted super-Gaussian function. [plots for rank = 1, rank = 2 and rank = 3 not reproduced]
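One caveat with the call above: without a p0 argument, curve_fit starts every parameter at 1.0, which can fail for off-center or wide beams. A sketch passing explicit starting guesses instead (the synthetic profile and the specific numbers are illustrative assumptions):

```python
import numpy as np
from scipy import optimize

def super_gaussian(x, amp, x0, sigma):
    rank = 2  # fixed order, as in the answer above
    return amp * ((np.exp(-(2 ** (2 * rank - 1)) * np.log(2)
                          * (((x - x0) ** 2) / sigma ** 2) ** rank)) ** 2)

# synthetic flat-top profile; real x, y come from the measurement
x = np.linspace(-10, 10, 201)
y = super_gaussian(x, amp=5.0, x0=2.0, sigma=4.0)

# explicit starting guesses: peak height, peak position, rough width
p0 = [y.max(), x[np.argmax(y)], 3.0]
opt, _ = optimize.curve_fit(super_gaussian, x, y, p0=p0)
vals = super_gaussian(x, *opt)
```

Deriving p0 from the data itself (maximum value and its location) keeps the guesses reasonable for any beam position.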
