Showing content from https://stackoverflow.com/questions/22244383/pandas-df-refill-adding-two-columns-of-different-shape below:

python - Pandas: df.refill, adding two columns of different shape

You just need to set the index first; otherwise what you were doing was correct. You can't directly add a Series of datetimes (e.g. df.Time) and a date range. You want a union of the two indexes, so either be explicit and use .union, or convert the Series to an Index, since '+' performs a union by default between two indexes.
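(Note for readers on recent pandas versions: '+' between two DatetimeIndexes no longer performs a union, so the explicit .union form is the one to reach for. A minimal sketch of just the union step, on made-up times:)

```python
import pandas as pd

# Irregular event times plus a regular 1-second grid
events = pd.Index(pd.to_datetime(
    ["2014-01-01 00:00:00.946", "2014-01-01 00:00:01.127"]))
grid = pd.date_range("2014-01-01 00:00:00", "2014-01-01 00:00:03", freq="s")

# Explicit union: sorted, duplicates removed
combined = grid.union(events)
print(len(combined))  # 4 grid points + 2 distinct event times = 6
```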

In [35]: intervals = np.random.randint(0,1000,size=100).cumsum()

In [36]: df = DataFrame({'time' : [ Timestamp('20140101')+pd.offsets.Milli(i) for i in intervals ],
                         'value' : np.random.randn(len(intervals))})

In [37]: df.head()
Out[37]: 
                        time     value
0 2014-01-01 00:00:00.946000 -0.322091
1 2014-01-01 00:00:01.127000  0.887412
2 2014-01-01 00:00:01.690000  0.537789
3 2014-01-01 00:00:02.332000  0.311556
4 2014-01-01 00:00:02.335000  0.273509

[5 rows x 2 columns]

In [40]: date_range('20140101 00:00:00','20140101 01:00:00',freq='s')
Out[40]: 
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3601, Freq: S, Timezone: None

In [38]: new_range = date_range('20140101 00:00:00','20140101 01:00:00',freq='s') + Index(df.time)

In [39]: new_range
Out[39]: 
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01 00:00:00, ..., 2014-01-01 01:00:00]
Length: 3701, Freq: None, Timezone: None

In [42]: df.set_index('time').reindex(new_range).head()
Out[42]: 
                               value
2014-01-01 00:00:00              NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01              NaN
2014-01-01 00:00:01.127000  0.887412
2014-01-01 00:00:01.690000  0.537789

[5 rows x 1 columns]

In [44]: df.set_index('time').reindex(new_range).ffill().head(10)
Out[44]: 
                               value
2014-01-01 00:00:00              NaN
2014-01-01 00:00:00.946000 -0.322091
2014-01-01 00:00:01        -0.322091
2014-01-01 00:00:01.127000  0.887412
2014-01-01 00:00:01.690000  0.537789
2014-01-01 00:00:02         0.537789
2014-01-01 00:00:02.332000  0.311556
2014-01-01 00:00:02.335000  0.273509
2014-01-01 00:00:03         0.273509
2014-01-01 00:00:03.245000 -1.034595

[10 rows x 1 columns]
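(For reference, the steps above collected into one self-contained sketch that also runs on current pandas, using .union instead of '+'; intervals are drawn from 1-999 ms so the event times are guaranteed unique:)

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Strictly increasing event times (intervals >= 1 ms, so no duplicates)
intervals = rng.integers(1, 1000, size=100).cumsum()
df = pd.DataFrame({
    "time": [pd.Timestamp("2014-01-01") + pd.offsets.Milli(int(i))
             for i in intervals],
    "value": rng.standard_normal(len(intervals)),
})

# Union of the regular 1-second grid with the irregular event times
new_range = pd.date_range("2014-01-01 00:00:00", "2014-01-01 01:00:00",
                          freq="s").union(pd.Index(df["time"]))

# Reindex onto the combined index, then forward-fill the gaps
filled = df.set_index("time").reindex(new_range).ffill()
```

Only the very first grid point (before any event has occurred) stays NaN, since there is nothing earlier to fill from.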

From the provided csv file (which, FYI, is named 'stocksA.csv'): you don't need to do df = DataFrame(df), as it's already a frame, nor do you need to specify the dtype.

You have duplicates in the Time column:

In [34]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp    25954 non-null float64
Spread       25954 non-null float64
dtypes: float64(2)

In [35]: df.drop_duplicates(['Time']).set_index('Time').reindex(new_range).ffill().info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 49354 entries, 2011-01-10 09:29:59.999400 to 2011-01-10 16:00:00
Data columns (total 2 columns):
Timestamp    49354 non-null float64
Spread       49354 non-null float64
dtypes: float64(2)

In [36]: df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 45782 entries, 0 to 45781
Data columns (total 3 columns):
Timestamp    45782 non-null float64
Spread       45782 non-null int64
Time         45782 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)

In [37]: df.drop_duplicates(['Time','Spread']).info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 26171 entries, 0 to 45780
Data columns (total 3 columns):
Timestamp    26171 non-null float64
Spread       26171 non-null int64
Time         26171 non-null datetime64[ns]
dtypes: datetime64[ns](1), float64(1), int64(1)

So it's probably easiest to simply drop them and reindex to the new times you want. If you WANT to preserve the Time/Spread duplicates, then this becomes a much more complicated problem. You will have to either use a multi-index and loop over the duplicates, or, better yet, just resample the data down (e.g. take the mean or similar).
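(A minimal sketch of that downsampling idea, on made-up tick data mirroring the question's Time/Spread columns: 1-second bars, taking the mean of the ticks in each bar.)

```python
import pandas as pd

# Hypothetical tick data with duplicate timestamps
df = pd.DataFrame({
    "Time": pd.to_datetime([
        "2011-01-10 09:30:00.010", "2011-01-10 09:30:00.010",
        "2011-01-10 09:30:00.500", "2011-01-10 09:30:01.200"]),
    "Spread": [100, 300, 200, 400],
})

# Downsample to 1-second bars, averaging all ticks in each bar
per_second = df.set_index("Time")["Spread"].resample("1s").mean()
print(per_second)  # 09:30:00 -> 200.0, 09:30:01 -> 400.0
```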

Here is how to deal with your duplicate data: group it by the duplicated column and perform an aggregation (here, the mean). You should do this before the reindexing step.

In [13]: df.groupby('Time')['Spread'].mean()
Out[13]: 
Time
2011-01-10 09:29:59.999400       2800
2011-01-10 09:30:00.000940       3800
2011-01-10 09:30:00.010130       1100
2011-01-10 09:30:00.018500       1100
2011-01-10 09:30:00.020060       1100
2011-01-10 09:30:00.020980       1100
2011-01-10 09:30:00.024570        100
2011-01-10 09:30:00.024769999     100
2011-01-10 09:30:00.028210       1100
2011-01-10 09:30:00.037950       1100
2011-01-10 09:30:00.038880       1100
2011-01-10 09:30:00.039140       1100
2011-01-10 09:30:00.040410       1100
2011-01-10 09:30:00.041510        100
2011-01-10 09:30:00.042530        100
...
2011-01-10 09:40:32.850540       300
2011-01-10 09:40:32.862300       300
2011-01-10 09:40:32.937410       300
2011-01-10 09:40:33.001750       300
2011-01-10 09:40:33.129500       300
2011-01-10 09:40:33.129650       300
2011-01-10 09:40:33.131560       300
2011-01-10 09:40:33.136100       200
2011-01-10 09:40:33.136310       200
2011-01-10 09:40:33.136560       200
2011-01-10 09:40:33.137590       200
2011-01-10 09:40:33.137640       200
2011-01-10 09:40:33.137850       200
2011-01-10 09:40:33.138840       200
2011-01-10 09:40:33.154219999    200
Name: Spread, Length: 25954
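(Putting the two steps together, collapse the duplicates first, then reindex onto the regular grid and forward-fill; a sketch on made-up data:)

```python
import pandas as pd

# Two ticks at the same timestamp, plus one later tick
df = pd.DataFrame({
    "Time": pd.to_datetime([
        "2011-01-10 09:30:00.020", "2011-01-10 09:30:00.020",
        "2011-01-10 09:30:01.500"]),
    "Spread": [1000, 1200, 100],
})

# 1. Collapse duplicate timestamps by averaging
spread = df.groupby("Time")["Spread"].mean()

# 2. Reindex onto the union of the 1-second grid and the event times, then ffill
grid = pd.date_range("2011-01-10 09:30:00", "2011-01-10 09:30:02", freq="s")
filled = spread.reindex(grid.union(spread.index)).ffill()
```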
