I have two numpy arrays, light_points and time_points, and I would like to apply some time series analysis methods to this data.
So I tried this:
import statsmodels.api as sm
import pandas as pd
tdf = pd.DataFrame({'time':time_points[:]})
rdf = pd.DataFrame({'light':light_points[:]})
rdf.index = pd.DatetimeIndex(freq='w',start=0,periods=len(rdf.light))
#rdf.index = pd.DatetimeIndex(tdf['time'])
This works, but does the wrong thing. The measurements are actually unevenly spaced in time, and if I simply declare the time_points pandas DataFrame as the index of my frame, I get an error:
rdf.index = pd.DatetimeIndex(tdf['time'])
decomp = sm.tsa.seasonal_decompose(rdf)
elif freq is None:
raise ValueError("You must specify a freq or x must be a pandas object with a timeseries index")
ValueError: You must specify a freq or x must be a pandas object with a timeseries index
I don't know how to fix this. Moreover, it seems that pandas' TimeSeries
has been deprecated.
I tried this:
rdf = pd.Series({'light':light_points[:]})
rdf.index = pd.DatetimeIndex(tdf['time'])
but it gives me a length-mismatch error:
ValueError: Length mismatch: Expected axis has 1 elements, new values have 122 elements
Still, I don't understand where this comes from, since rdf['light'] and tdf['time'] have the same length…
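As an aside, the mismatch can be explained: a dict passed to pd.Series becomes a one-element Series whose single value is the whole array, so a 122-element index cannot be assigned to it. A minimal reproduction (the stand-in array below is an assumption, sized 122 to match the error message):

```python
import numpy as np
import pandas as pd

light_points = np.linspace(0., 1., 122)  # stand-in for the question's data

# A dict of {label: array} yields a Series of length 1: one entry whose
# value is the entire array, hence "Expected axis has 1 elements".
bad = pd.Series({'light': light_points})
print(len(bad))   # 1

# Passing the array itself yields the intended 122-element Series.
good = pd.Series(light_points, name='light')
print(len(good))  # 122
```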
Finally, I tried to solve it by defining my rdf as a pandas Series:
rdf = pd.Series(light_points[:],index=pd.DatetimeIndex(time_points[:]))
And then I get this:
ValueError: You must specify a freq or x must be a pandas object with a timeseries index
Then I tried replacing the index with:
pd.TimeSeries(time_points[:])
It gives me an error on the seasonal_decompose method line:
AttributeError: 'Float64Index' object has no attribute 'inferred_freq'
How do I handle unevenly spaced data? I was thinking of creating an approximately evenly spaced time array by adding many unknown values between the existing ones, and using interpolation to "evaluate" those points, but I suspect there is a cleaner and simpler solution.
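For reference, that interpolation idea can be sketched roughly like this, with synthetic stand-ins for time_points and light_points and an assumed weekly grid:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-ins for the question's arrays: 120 unevenly spaced weeks.
offsets_days = np.sort(rng.choice(np.arange(400), size=120, replace=False)) * 7
time_points = pd.Timestamp('2015-01-04') + pd.to_timedelta(offsets_days, unit='D')
light_points = rng.normal(size=len(time_points))

s = pd.Series(light_points, index=pd.DatetimeIndex(time_points))
# Snap onto a uniform weekly grid and fill the gaps by interpolation.
uniform = s.resample('W').mean().interpolate(method='time')
print(uniform.index.inferred_freq)  # the regular grid now has an inferred freq
```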
Answer:
seasonal_decompose() requires a freq that is either provided as part of the DateTimeIndex meta information, can be inferred by pandas.Index.inferred_freq, or is given by the user as an integer stating the number of periods per cycle, e.g. 12 for monthly (from the docstring of seasonal_mean):
def seasonal_decompose(x, model="additive", filt=None, freq=None):
    """
    Parameters
    ----------
    x : array-like
        Time series
    model : str {"additive", "multiplicative"}
        Type of seasonal component. Abbreviations are accepted.
    filt : array-like
        The filter coefficients for filtering out the seasonal component.
        The default is a symmetric moving average.
    freq : int, optional
        Frequency of the series. Must be used if x is not a pandas
        object with a timeseries index.
    """
To illustrate, using random sample data:
from datetime import datetime
import numpy as np
import pandas as pd

length = 400
x = np.sin(np.arange(length)) * 10 + np.random.randn(length)
df = pd.DataFrame(data=x,
                  index=pd.date_range(start=datetime(2015, 1, 1), periods=length, freq='W'),
                  columns=['value'])
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 400 entries, 2015-01-04 to 2022-08-28
Freq: W-SUN
decomp = sm.tsa.seasonal_decompose(df)
data = pd.concat([df, decomp.trend, decomp.seasonal, decomp.resid], axis=1)
data.columns = ['series', 'trend', 'seasonal', 'resid']
Data columns (total 4 columns):
series 400 non-null float64
trend 348 non-null float64
seasonal 400 non-null float64
resid 348 non-null float64
dtypes: float64(4)
memory usage: 15.6 KB
So far, so good. Now randomly drop elements from the DateTimeIndex to create unevenly spaced data:
df = df.iloc[np.unique(np.random.randint(low=0, high=length, size=int(length * .8)))]
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 222 entries, 2015-01-11 to 2022-08-21
Data columns (total 1 columns):
value 222 non-null float64
dtypes: float64(1)
memory usage: 3.5 KB
df.index.freq
None
df.index.inferred_freq
None
Running seasonal_decompose on this data "works":
decomp = sm.tsa.seasonal_decompose(df, freq=52)
data = pd.concat([df, decomp.trend, decomp.seasonal, decomp.resid], axis=1)
data.columns = ['series', 'trend', 'seasonal', 'resid']
DatetimeIndex: 224 entries, 2015-01-04 to 2022-08-07
Data columns (total 4 columns):
series 224 non-null float64
trend 172 non-null float64
seasonal 224 non-null float64
resid 172 non-null float64
dtypes: float64(4)
memory usage: 8.8 KB
The question is how useful the results are. Even with data that has no gaps complicating the inference of seasonal patterns (see the example use of .interpolate() in the release notes), statsmodels qualifies this procedure as follows:
Notes
-----
This is a naive decomposition. More sophisticated methods should
be preferred.

The additive model is Y[t] = T[t] + S[t] + e[t]

The multiplicative model is Y[t] = T[t] * S[t] * e[t]

The seasonal component is first removed by applying a convolution
filter to the data. The average of this smoothed series for each
period is the returned seasonal component.