public class ChunkOutputStream extends OutputStream

An OutputStream which will write to a row. The written data will be split up into chunks of the given chunkSize. Each chunk gets written to its own column, which will have the chunk number (starting at 0) as its column key (Long).

This implementation is not thread-safe!

Based on the Hector implementation for Cassandra:
https://github.com/rantav/hector/blob/master/core/src/main/java/me/prettyprint/cassandra/io/ChunkOutputStream.java

Constructor Summary
ChunkOutputStream(org.apache.hadoop.conf.Configuration conf, byte[] tableName, byte[] cf, byte[] key, int chunkSize)
Creates a special type of OutputStream that writes data directly to HBase.
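The documented splitting scheme can be sketched without an HBase cluster. In the illustration below, a TreeMap is a hypothetical stand-in for the target row, with chunk numbers (Long, starting at 0) as column keys; this shows the chunking behavior only, not the actual HBase writes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the chunk-splitting idea: the map stands in for one row,
// and each full chunk becomes its own "column" keyed by chunk number.
class ChunkSketch extends OutputStream {
    private final int chunkSize;
    private final SortedMap<Long, byte[]> row = new TreeMap<>(); // stand-in for the HBase row
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private long chunkNumber = 0; // column key of the next chunk to emit

    ChunkSketch(int chunkSize) {
        this.chunkSize = chunkSize;
    }

    @Override
    public void write(int b) throws IOException {
        buffer.write(b);
        if (buffer.size() == chunkSize) {
            flushChunk(); // chunk is full: emit it as its own column
        }
    }

    private void flushChunk() {
        if (buffer.size() > 0) {
            row.put(chunkNumber++, buffer.toByteArray());
            buffer.reset();
        }
    }

    @Override
    public void close() throws IOException {
        flushChunk(); // emit the final, possibly short, chunk
    }

    SortedMap<Long, byte[]> row() {
        return row;
    }
}
```

With a chunkSize of 4, writing 10 bytes produces three columns keyed 0, 1, 2, holding 4, 4, and 2 bytes respectively.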
Constructor Detail

public ChunkOutputStream(org.apache.hadoop.conf.Configuration conf, byte[] tableName, byte[] cf, byte[] key, int chunkSize)

Creates a special type of OutputStream that writes data directly to HBase.

Parameters:
conf - HBase cluster configuration
tableName - name of the table that writes will be made to
cf - name of the column family where data is going to be written
key - the row key
chunkSize - the size of each column, in bytes. For HBase, the max is 10MB

Method Detail

public void write(int b) throws IOException
Specified by:
write in class OutputStream
Throws:
IOException
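The single-byte write follows the general OutputStream contract: only the low eight bits of the int argument are stored. A minimal stand-alone illustration of that contract (using ByteArrayOutputStream rather than this class):

```java
import java.io.ByteArrayOutputStream;

class WriteLowByteDemo {
    static byte demo() {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(0x1FF); // only the low eight bits (0xFF) are stored
        return out.toByteArray()[0];
    }
}
```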
public void close() throws IOException

Specified by:
close in interface Closeable
Specified by:
close in interface AutoCloseable
Overrides:
close in class OutputStream
Throws:
IOException
public void flush() throws IOException

Specified by:
flush in interface Flushable
Overrides:
flush in class OutputStream
Throws:
IOException
Copyright © 2010-2012 The Apache Software Foundation. All Rights Reserved.