Java RCFileOutputFormat Class Code Examples


This article collects typical usage examples of the Java class org.apache.hadoop.hive.ql.io.RCFileOutputFormat. If you are unsure what RCFileOutputFormat is for or how to use it, the curated class examples below should help.



The RCFileOutputFormat class belongs to the org.apache.hadoop.hive.ql.io package. Eleven code examples of the class are presented below, sorted by popularity by default.
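
Most of the examples below lean on one static helper, RCFileOutputFormat.setColumnNumber(Configuration, int), which records the column count of an RCFile in the Hadoop configuration before a writer is opened. As a minimal sketch of that contract (the class name RcFileColumnNumberDemo and the output path /tmp/demo.rc are hypothetical, chosen only for illustration):

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.io.RCFile;
import org.apache.hadoop.hive.ql.io.RCFileOutputFormat;

public class RcFileColumnNumberDemo {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);

        // The writer reads the expected column count from the configuration,
        // so setColumnNumber must be called before the writer is constructed.
        RCFileOutputFormat.setColumnNumber(conf, 3);

        RCFile.Writer writer = new RCFile.Writer(fs, conf, new Path("/tmp/demo.rc"));
        writer.close(); // an empty but structurally valid RCFile
    }
}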

Example 1: testRCText

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Test
public void testRCText()
        throws Exception
{
    List<TestColumn> testColumns = ImmutableList.copyOf(filter(TEST_COLUMNS, testColumn -> {
        return !testColumn.getName().equals("t_struct_null") // TODO: This is a bug in the RC text reader
                && !testColumn.getName().equals("t_map_null_key_complex_key_value"); // RC file does not support complex type as key of a map
    }));

    HiveOutputFormat<?, ?> outputFormat = new RCFileOutputFormat();
    InputFormat<?, ?> inputFormat = new RCFileInputFormat<>();
    @SuppressWarnings("deprecation")
    SerDe serde = new ColumnarSerDe();
    File file = File.createTempFile("presto_test", "rc-text");
    try {
        FileSplit split = createTestFile(file.getAbsolutePath(), outputFormat, serde, null, testColumns, NUM_ROWS);
        testCursorProvider(new ColumnarTextHiveRecordCursorProvider(), split, inputFormat, serde, testColumns, NUM_ROWS);
        testCursorProvider(new GenericHiveRecordCursorProvider(), split, inputFormat, serde, testColumns, NUM_ROWS);
    }
    finally {
        //noinspection ResultOfMethodCallIgnored
        file.delete();
    }
}
 
Developer: y-lan, Project: presto, Lines: 25, Source: TestHiveFileFormats.java


Example 2: testRcTextPageSource

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Test(enabled = false)
public void testRcTextPageSource()
        throws Exception
{
    HiveOutputFormat<?, ?> outputFormat = new RCFileOutputFormat();
    InputFormat<?, ?> inputFormat = new RCFileInputFormat<>();
    @SuppressWarnings("deprecation")
    SerDe serde = new ColumnarSerDe();
    File file = File.createTempFile("presto_test", "rc-binary");
    file.delete();
    try {
        FileSplit split = createTestFile(file.getAbsolutePath(), outputFormat, serde, null, TEST_COLUMNS, NUM_ROWS);
        testPageSourceFactory(new RcFilePageSourceFactory(TYPE_MANAGER), split, inputFormat, serde, TEST_COLUMNS);
    }
    finally {
        //noinspection ResultOfMethodCallIgnored
        file.delete();
    }
}
 
Developer: y-lan, Project: presto, Lines: 20, Source: TestHiveFileFormats.java


Example 3: testRCBinary

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Test
public void testRCBinary()
        throws Exception
{
    List<TestColumn> testColumns = ImmutableList.copyOf(filter(TEST_COLUMNS, testColumn -> {
        // RC file does not support complex type as key of a map
        return !testColumn.getName().equals("t_map_null_key_complex_key_value");
    }));

    HiveOutputFormat<?, ?> outputFormat = new RCFileOutputFormat();
    InputFormat<?, ?> inputFormat = new RCFileInputFormat<>();
    @SuppressWarnings("deprecation")
    SerDe serde = new LazyBinaryColumnarSerDe();
    File file = File.createTempFile("presto_test", "rc-binary");
    try {
        FileSplit split = createTestFile(file.getAbsolutePath(), outputFormat, serde, null, testColumns, NUM_ROWS);
        testCursorProvider(new ColumnarBinaryHiveRecordCursorProvider(), split, inputFormat, serde, testColumns, NUM_ROWS);
        testCursorProvider(new GenericHiveRecordCursorProvider(), split, inputFormat, serde, testColumns, NUM_ROWS);
    }
    finally {
        //noinspection ResultOfMethodCallIgnored
        file.delete();
    }
}
 
Developer: y-lan, Project: presto, Lines: 25, Source: TestHiveFileFormats.java


Example 4: testRcBinaryPageSource

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Test(enabled = false)
public void testRcBinaryPageSource()
        throws Exception
{
    HiveOutputFormat<?, ?> outputFormat = new RCFileOutputFormat();
    InputFormat<?, ?> inputFormat = new RCFileInputFormat<>();
    @SuppressWarnings("deprecation")
    SerDe serde = new LazyBinaryColumnarSerDe();
    File file = File.createTempFile("presto_test", "rc-binary");
    file.delete();
    try {
        FileSplit split = createTestFile(file.getAbsolutePath(), outputFormat, serde, null, TEST_COLUMNS, NUM_ROWS);
        testPageSourceFactory(new RcFilePageSourceFactory(TYPE_MANAGER), split, inputFormat, serde, TEST_COLUMNS);
    }
    finally {
        //noinspection ResultOfMethodCallIgnored
        file.delete();
    }
}
 
Developer: y-lan, Project: presto, Lines: 20, Source: TestHiveFileFormats.java


Example 5: getStoreType

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
public static String getStoreType(String fileFormat) {
  Preconditions.checkNotNull(fileFormat);

  String[] fileFormatArray = fileFormat.split("\\.");
  if (fileFormatArray.length < 1) {
    throw new CatalogException("Invalid Hive file output format: " + fileFormat);
  }

  String outputFormatClass = fileFormatArray[fileFormatArray.length - 1];
  if (outputFormatClass.equals(HiveIgnoreKeyTextOutputFormat.class.getSimpleName())) {
    return CatalogProtos.StoreType.CSV.name();
  } else if (outputFormatClass.equals(RCFileOutputFormat.class.getSimpleName())) {
    return CatalogProtos.StoreType.RCFILE.name();
  } else {
    throw new CatalogException("Unsupported file output format: " + fileFormat);
  }
}
 
Developer: apache, Project: incubator-tajo, Lines: 18, Source: HCatalogUtil.java
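
A hedged usage sketch of the helper above (assuming the HCatalogUtil class from the incubator-tajo example is on the classpath; the mapping is driven solely by the simple name at the end of the fully qualified class name):

String storeType = HCatalogUtil.getStoreType(
        "org.apache.hadoop.hive.ql.io.RCFileOutputFormat");
// storeType is now "RCFILE"; an unrecognized format throws CatalogException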


Example 6: getStoreType

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
public static String getStoreType(String fileFormat) {
  Preconditions.checkNotNull(fileFormat);

  String[] fileFormatArray = fileFormat.split("\\.");
  if (fileFormatArray.length < 1) {
    throw new CatalogException("Invalid Hive file output format: " + fileFormat);
  }

  String outputFormatClass = fileFormatArray[fileFormatArray.length - 1];
  if (outputFormatClass.equals(HiveIgnoreKeyTextOutputFormat.class.getSimpleName())) {
    return CatalogProtos.StoreType.CSV.name();
  } else if (outputFormatClass.equals(HiveSequenceFileOutputFormat.class.getSimpleName())) {
    return CatalogProtos.StoreType.SEQUENCEFILE.name();
  } else if (outputFormatClass.equals(RCFileOutputFormat.class.getSimpleName())) {
    return CatalogProtos.StoreType.RCFILE.name();
  } else {
    throw new CatalogException("Unsupported file output format: " + fileFormat);
  }
}
 
Developer: gruter, Project: tajo-cdh, Lines: 20, Source: HCatalogUtil.java


Example 7: setStoreLocation

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Override
public void setStoreLocation(String location, Job job) throws IOException {
    super.setStoreLocation(location, job);
    // set number of columns if this is set in context.
    Properties p = getUDFProperties();
    if (p != null) {
        numColumns = Integer.parseInt(p.getProperty("numColumns", "-1"));
    }

    if (numColumns > 0) {
        RCFileOutputFormat.setColumnNumber(job.getConfiguration(), numColumns);
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 14, Source: HiveColumnarStorage.java
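
Worth noting: in this Pig StoreFunc the column count is stashed in the UDF properties under the key "numColumns" (presumably populated on the frontend when the output schema is checked, though that side is not shown here) and re-read on the backend, where it feeds the same RCFileOutputFormat.setColumnNumber call used directly in the test helpers below.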


Example 8: writeRCFileTest

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
private static int writeRCFileTest(FileSystem fs, int rowCount, Path file, int columnNum,
        CompressionCodec codec, int columnCount) throws IOException {
    fs.delete(file, true);
    int rowsWritten = 0;

    resetRandomGenerators();

    RCFileOutputFormat.setColumnNumber(conf, columnNum);
    RCFile.Writer writer = new RCFile.Writer(fs, conf, file, null, codec);

    byte[][] columnRandom;

    BytesRefArrayWritable bytes = new BytesRefArrayWritable(columnNum);
    columnRandom = new byte[columnNum][];
    for (int i = 0; i < columnNum; i++) {
        BytesRefWritable cu = new BytesRefWritable();
        bytes.set(i, cu);
    }

    for (int i = 0; i < rowCount; i++) {
        nextRandomRow(columnRandom, bytes, columnCount);
        rowsWritten++;
        writer.append(bytes);
    }
    writer.close();

    return rowsWritten;
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 29, Source: TestHiveColumnarLoader.java
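
To verify what a helper like writeRCFileTest produced, the file can be read back with RCFile.Reader from the same org.apache.hadoop.hive.ql.io package. A minimal sketch, assuming the same fs, conf, and file as in the example above:

import org.apache.hadoop.hive.ql.io.RCFile;
import org.apache.hadoop.hive.serde2.columnar.BytesRefArrayWritable;
import org.apache.hadoop.io.LongWritable;

RCFile.Reader reader = new RCFile.Reader(fs, file, conf);
LongWritable rowId = new LongWritable();
BytesRefArrayWritable row = new BytesRefArrayWritable();
int rowsRead = 0;
while (reader.next(rowId)) {    // advance to the next row, filling in its id
    reader.getCurrentRow(row);  // materialize the current row's columns
    rowsRead++;
}
reader.close();
// rowsRead should equal the value returned by writeRCFileTest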


Example 9: writeRCFileTest

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
private static int writeRCFileTest(FileSystem fs, int rowCount, Path file, int columnNum,
        CompressionCodec codec, int columnCount) throws IOException {
    fs.delete(file, true);
    int rowsWritten = 0;

    RCFileOutputFormat.setColumnNumber(conf, columnNum);
    RCFile.Writer writer = new RCFile.Writer(fs, conf, file, null, codec);

    byte[][] columnRandom;

    BytesRefArrayWritable bytes = new BytesRefArrayWritable(columnNum);
    columnRandom = new byte[columnNum][];
    for (int i = 0; i < columnNum; i++) {
        BytesRefWritable cu = new BytesRefWritable();
        bytes.set(i, cu);
    }

    for (int i = 0; i < rowCount; i++) {

        bytes.resetValid(columnRandom.length);
        for (int j = 0; j < columnRandom.length; j++) {
            columnRandom[j] = "Sample value".getBytes();
            bytes.get(j).set(columnRandom[j], 0, columnRandom[j].length);
        }
        rowsWritten++;
        writer.append(bytes);
    }
    writer.close();

    return rowsWritten;
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 33, Source: TestHiveColumnarStorage.java


Example 10: writeRCFileTest

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
private static int writeRCFileTest(FileSystem fs, int rowCount, Path file,
        int columnNum, CompressionCodec codec, int columnCount)
        throws IOException {
    fs.delete(file, true);
    int rowsWritten = 0;

    resetRandomGenerators();

    RCFileOutputFormat.setColumnNumber(conf, columnNum);
    RCFile.Writer writer = new RCFile.Writer(fs, conf, file, null, codec);

    byte[][] columnRandom;

    BytesRefArrayWritable bytes = new BytesRefArrayWritable(columnNum);
    columnRandom = new byte[columnNum][];
    for (int i = 0; i < columnNum; i++) {
        BytesRefWritable cu = new BytesRefWritable();
        bytes.set(i, cu);
    }

    for (int i = 0; i < rowCount; i++) {
        nextRandomRow(columnRandom, bytes, columnCount);
        rowsWritten++;
        writer.append(bytes);
    }
    writer.close();

    return rowsWritten;
}
 
Developer: kaituo, Project: sedge, Lines: 30, Source: TestHiveColumnarLoader.java


Example 11: writeTestData

import org.apache.hadoop.hive.ql.io.RCFileOutputFormat; // import the required package/class
@Override
public void writeTestData(File file, int recordCounts, int columnCount,
        String colSeparator) throws IOException {

    // write random test data
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);

    RCFileOutputFormat.setColumnNumber(conf, columnCount);
    RCFile.Writer writer = new RCFile.Writer(fs, conf, new Path(
            file.getAbsolutePath()));

    BytesRefArrayWritable bytes = new BytesRefArrayWritable(columnCount);
    for (int c = 0; c < columnCount; c++) {
        bytes.set(c, new BytesRefWritable());
    }

    try {
        for (int r = 0; r < recordCounts; r++) {
            // for each row, write n random columns
            for (int c = 0; c < columnCount; c++) {
                byte[] stringbytes = String.valueOf(Math.random())
                        .getBytes();
                bytes.get(c).set(stringbytes, 0, stringbytes.length);
            }
            writer.append(bytes);
        }
    } finally {
        writer.close();
    }
}
 
Developer: sigmoidanalytics, Project: spork-streaming, Lines: 42, Source: TestAllLoader.java
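
The column-oriented layout these writers produce pays off on the read side, where a consumer can ask Hive to materialize only a subset of columns. A hedged sketch using ColumnProjectionUtils from org.apache.hadoop.hive.serde2 (the exact method set has shifted across Hive versions, so treat this as illustrative rather than definitive):

import java.util.ArrayList;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;

Configuration conf = new Configuration();
// Request that readers materialize only columns 0 and 2; the remaining
// columns can then be skipped without being decompressed.
ArrayList<Integer> wantedColumns = new ArrayList<>(Arrays.asList(0, 2));
ColumnProjectionUtils.setReadColumnIDs(conf, wantedColumns);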



Note: the org.apache.hadoop.hive.ql.io.RCFileOutputFormat examples in this article were collected from open-source projects hosted on GitHub, MSDocs, and similar platforms. Copyright in each snippet remains with its original authors; consult the corresponding project's license before redistributing or reusing the code. Do not republish without permission.

