Writable (Hadoop 1.2.1 API)

org.apache.hadoop.io
Interface Writable
All Known Subinterfaces:
InputSplit, WritableComparable<T>
All Known Implementing Classes:
AbstractDelegationTokenIdentifier, AbstractMapWritable, AccessControlList, ArrayWritable, BlockLocation, BloomFilter, BooleanWritable, BytesWritable, ByteWritable, ClusterMetrics, ClusterStatus, CombineFileSplit, CombineFileSplit, CompositeInputSplit, CompressedWritable, Configuration, ContentSummary, Counter, CounterGroup, Counters, Counters, Counters.Counter, Counters.Group, CountingBloomFilter, Credentials, DataDrivenDBInputFormat.DataDrivenDBInputSplit, DBInputFormat.DBInputSplit, DBInputFormat.DBInputSplit, DBInputFormat.NullDBWritable, DBInputFormat.NullDBWritable, DelegationKey, DelegationTokenIdentifier, DocumentAndOp, DocumentID, DoubleWritable, DynamicBloomFilter, FileChecksum, FileSplit, FileSplit, FileStatus, Filter, FloatWritable, FsPermission, GenericWritable, ID, ID, IntermediateForm, IntWritable, JobConf, JobID, JobID, JobProfile, JobQueueInfo, JobSplit.SplitMetaInfo, JobStatus, JobTokenIdentifier, JvmTask, Key, LineDocTextAndOp, LongWritable, MapTaskCompletionEventsUpdate, MapWritable, MD5Hash, MD5MD5CRC32FileChecksum, MultiFileSplit, MultiFileWordCount.WordOffset, NullWritable, ObjectWritable, PermissionStatus, QueueAclsInfo, Record, RecordTypeInfo, RetouchedBloomFilter, SecondarySort.IntPair, SequenceFile.Metadata, Shard, SleepJob.EmptySplit, SortedMapWritable, TaggedMapOutput, Task, TaskAttemptID, TaskAttemptID, TaskCompletionEvent, TaskID, TaskID, TaskReport, TaskStatus, TaskTrackerStatus, Text, Token, TokenIdentifier, TupleWritable, TwoDArrayWritable, TypedBytesWritable, UTF8, VersionedWritable, VIntWritable, VLongWritable
public interface Writable

A serializable object which implements a simple, efficient serialization protocol based on DataInput and DataOutput.

Any key or value type in the Hadoop Map-Reduce framework implements this interface.

Implementations typically implement a static read(DataInput) method which constructs a new instance, calls readFields(DataInput) and returns the instance.

Example:

     import java.io.DataInput;
     import java.io.DataOutput;
     import java.io.IOException;

     import org.apache.hadoop.io.Writable;

     public class MyWritable implements Writable {
       // Some data
       private int counter;
       private long timestamp;

       // Serialize this object's fields to 'out'.
       public void write(DataOutput out) throws IOException {
         out.writeInt(counter);
         out.writeLong(timestamp);
       }

       // Deserialize this object's fields from 'in'.
       public void readFields(DataInput in) throws IOException {
         counter = in.readInt();
         timestamp = in.readLong();
       }

       // Conventional static factory: construct, populate via readFields, return.
       public static MyWritable read(DataInput in) throws IOException {
         MyWritable w = new MyWritable();
         w.readFields(in);
         return w;
       }
     }
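
A minimal usage sketch, not part of the original Javadoc: the MyWritable above can be round-tripped through an in-memory byte array with the standard java.io stream classes, exercising both write(DataOutput) and the static read(DataInput) factory. The class name MyWritableRoundTrip is illustrative.

     import java.io.ByteArrayInputStream;
     import java.io.ByteArrayOutputStream;
     import java.io.DataInputStream;
     import java.io.DataOutputStream;
     import java.io.IOException;

     public class MyWritableRoundTrip {
       public static void main(String[] args) throws IOException {
         MyWritable original = new MyWritable();

         // Serialize via write(DataOutput) into an in-memory buffer.
         ByteArrayOutputStream buffer = new ByteArrayOutputStream();
         original.write(new DataOutputStream(buffer));

         // Deserialize via the static read(DataInput) factory.
         DataInputStream in =
             new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
         MyWritable copy = MyWritable.read(in);
       }
     }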
 
write

void write(DataOutput out) throws IOException

Serialize the fields of this object to out.

Parameters:
out - DataOutput to serialize this object into.
Throws:
IOException
readFields

void readFields(DataInput in) throws IOException

Deserialize the fields of this object from in.

For efficiency, implementations should attempt to re-use storage in the existing object where possible.

Parameters:
in - DataInput to deserialize this object from.
Throws:
IOException
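
As a hedged sketch of the re-use advice above (not from the original Javadoc), a caller can allocate one MyWritable and call readFields(DataInput) on it repeatedly, letting each call overwrite the previous record's fields instead of allocating a new object per record. The class name, file path, and record count below are illustrative assumptions.

     import java.io.DataInputStream;
     import java.io.FileInputStream;
     import java.io.IOException;

     public class ReadAllRecords {
       // 'path' is assumed to point at records written back-to-back by
       // MyWritable.write(); recordCount is a hypothetical stand-in.
       public static void readAll(String path, int recordCount) throws IOException {
         MyWritable reused = new MyWritable();
         DataInputStream in = new DataInputStream(new FileInputStream(path));
         try {
           for (int i = 0; i < recordCount; i++) {
             reused.readFields(in);  // overwrites the same object's fields in place
             // ... process the current record before the next readFields call ...
           }
         } finally {
           in.close();
         }
       }
     }
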
Copyright © 2009 The Apache Software Foundation
