
Java InvalidHFileException Class Code Examples


This article compiles typical usage examples of the Java class org.apache.hadoop.hbase.io.hfile.InvalidHFileException. If you are wondering what the InvalidHFileException class is for, or how to use it in practice, the curated code examples below should help.



The InvalidHFileException class belongs to the org.apache.hadoop.hbase.io.hfile package. Nine code examples of the class are shown below, sorted by popularity by default.
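All of the examples below perform essentially the same validation: walk the HFile's cells and throw InvalidHFileException when a row key sorts before its predecessor. As a minimal, self-contained sketch of that core check (using a stand-in exception class and plain byte arrays instead of the real HBase API, whose readers and Cells require a running cluster context):

```java
import java.util.Arrays;
import java.util.List;

public class OrderCheckDemo {
    // Stand-in for org.apache.hadoop.hbase.io.hfile.InvalidHFileException,
    // used here only so the sketch compiles without HBase on the classpath.
    static class InvalidHFileException extends RuntimeException {
        InvalidHFileException(String msg) { super(msg); }
    }

    // Throws if any row key sorts before its predecessor in
    // lexicographic (unsigned byte) order -- the invariant the
    // assertBulkLoadHFileOk examples below enforce per cell.
    static void assertRowsSorted(List<byte[]> rows) {
        byte[] prev = null;
        for (byte[] row : rows) {
            if (prev != null && Arrays.compareUnsigned(prev, row) > 0) {
                throw new InvalidHFileException(
                    "Previous row is greater than current row: previous="
                        + new String(prev) + " current=" + new String(row));
            }
            prev = row;
        }
    }

    public static void main(String[] args) {
        // A sorted sequence passes silently.
        assertRowsSorted(Arrays.asList("a".getBytes(), "b".getBytes()));
        // An out-of-order sequence is rejected.
        try {
            assertRowsSorted(Arrays.asList("b".getBytes(), "a".getBytes()));
        } catch (InvalidHFileException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The real implementations compare Cells (or KeyValue buffers) with a CellComparator rather than raw arrays, and additionally check that consecutive cells share the same column family.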

Example 1: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader = null;
  try {
    LOG.info(
        "Validating hfile at " + srcPath + " for inclusion in " + "store " + this + " region "
            + this.getRegionInfo().getRegionNameAsString());
    reader = HFile.createReader(srcPath.getFileSystem(conf), srcPath, cacheConf, conf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey != null, "First key can not be null");
    byte[] lk = reader.getLastKey();
    Preconditions.checkState(lk != null, "Last key can not be null");
    byte[] lastKey = KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) + " last=" + Bytes
        .toStringBinary(lastKey));
    LOG.debug(
        "Region bounds: first=" + Bytes.toStringBinary(getRegionInfo().getStartKey()) + " last="
            + Bytes.toStringBinary(getRegionInfo().getEndKey()));

    if (!this.getRegionInfo().containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region " + this
              .getRegionInfo().getRegionNameAsString());
    }

    if (reader.length() > conf
        .getLong(HConstants.HREGION_MAX_FILESIZE, HConstants.DEFAULT_MAX_FILE_SIZE)) {
      LOG.warn(
          "Trying to bulk load hfile " + srcPath.toString() + " with size: " + reader.length()
              + " bytes can be problematic as it may lead to oversplitting.");
    }

    if (verifyBulkLoads) {
      long verificationStartTime = EnvironmentEdgeManager.currentTime();
      LOG.info("Full verification started for bulk load hfile: " + srcPath.toString());
      Cell prevCell = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        Cell cell = scanner.getKeyValue();
        if (prevCell != null) {
          if (CellComparator.compareRows(prevCell, cell) > 0) {
            throw new InvalidHFileException(
                "Previous row is greater than" + " current row: path=" + srcPath + " previous="
                    + CellUtil.getCellKeyAsString(prevCell) + " current=" + CellUtil
                    .getCellKeyAsString(cell));
          }
          if (CellComparator.compareFamilies(prevCell, cell) != 0) {
            throw new InvalidHFileException(
                "Previous key had different" + " family compared to current key: path=" + srcPath
                    + " previous=" + Bytes
                    .toStringBinary(prevCell.getFamilyArray(), prevCell.getFamilyOffset(),
                        prevCell.getFamilyLength()) + " current=" + Bytes
                    .toStringBinary(cell.getFamilyArray(), cell.getFamilyOffset(),
                        cell.getFamilyLength()));
          }
        }
        prevCell = cell;
      } while (scanner.next());
      LOG.info(
          "Full verification complete for bulk load hfile: " + srcPath.toString() + " took " + (
              EnvironmentEdgeManager.currentTime() - verificationStartTime) + " ms");
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: fengchen8086 | Project: ditb | Lines: 70 | Source: HStore.java


Example 2: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
/**
 * This throws a WrongRegionException if the HFile does not fit in this region, or an
 * InvalidHFileException if the HFile is not valid.
 */
void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in " + "store " + this
        + " region " + this.region);
    reader = HFile.createReader(srcPath.getFileSystem(conf), srcPath, cacheConf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    byte[] lk = reader.getLastKey();
    byte[] lastKey = (lk == null) ? null : KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) + " last="
        + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" + Bytes.toStringBinary(region.getStartKey()) + " last="
        + Bytes.toStringBinary(region.getEndKey()));

    HRegionInfo hri = region.getRegionInfo();
    if (!hri.containsRange(firstKey, lastKey)) {
      throw new WrongRegionException("Bulk load file " + srcPath.toString()
          + " does not fit inside region " + this.region);
    }

    if (verifyBulkLoads) {
      KeyValue prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        KeyValue kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getRowOffset(), prevKV.getRowLength(),
            kv.getBuffer(), kv.getRowOffset(), kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getKey()) + " current="
                + Bytes.toStringBinary(kv.getKey()));
          }
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getFamilyOffset(),
            prevKV.getFamilyLength(), kv.getBuffer(), kv.getFamilyOffset(), kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getFamily()) + " current="
                + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: fengchen8086 | Project: LCIndex-HBase-0.94.16 | Lines: 57 | Source: Store.java


Example 3: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
    HFile.Reader reader = null;
    try {
        LOG.info("Validating hfile at " + srcPath + " for inclusion in "
                + "store " + this + " region " + this.getRegionInfo().getRegionNameAsString());
        reader = HFile.createReader(srcPath.getFileSystem(conf),
                srcPath, cacheConf, conf);
        reader.loadFileInfo();

        byte[] firstKey = reader.getFirstRowKey();
        Preconditions.checkState(firstKey != null, "First key can not be null");
        byte[] lk = reader.getLastKey();
        Preconditions.checkState(lk != null, "Last key can not be null");
        byte[] lastKey = KeyValue.createKeyValueFromKey(lk).getRow();

        LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
                " last=" + Bytes.toStringBinary(lastKey));
        LOG.debug("Region bounds: first=" +
                Bytes.toStringBinary(getRegionInfo().getStartKey()) +
                " last=" + Bytes.toStringBinary(getRegionInfo().getEndKey()));

        if (!this.getRegionInfo().containsRange(firstKey, lastKey)) {
            throw new WrongRegionException(
                    "Bulk load file " + srcPath.toString() + " does not fit inside region "
                            + this.getRegionInfo().getRegionNameAsString());
        }

        if (reader.length() > conf.getLong(HConstants.HREGION_MAX_FILESIZE,
                HConstants.DEFAULT_MAX_FILE_SIZE)) {
            LOG.warn("Trying to bulk load hfile " + srcPath.toString() + " with size: " +
                    reader.length() + " bytes can be problematic as it may lead to oversplitting.");
        }

        if (verifyBulkLoads) {
            long verificationStartTime = EnvironmentEdgeManager.currentTime();
            LOG.info("Full verification started for bulk load hfile: " + srcPath.toString());
            Cell prevCell = null;
            HFileScanner scanner = reader.getScanner(false, false, false);
            scanner.seekTo();
            do {
                Cell cell = scanner.getKeyValue();
                if (prevCell != null) {
                    if (CellComparator.compareRows(prevCell, cell) > 0) {
                        throw new InvalidHFileException("Previous row is greater than"
                                + " current row: path=" + srcPath + " previous="
                                + CellUtil.getCellKeyAsString(prevCell) + " current="
                                + CellUtil.getCellKeyAsString(cell));
                    }
                    if (CellComparator.compareFamilies(prevCell, cell) != 0) {
                        throw new InvalidHFileException("Previous key had different"
                                + " family compared to current key: path=" + srcPath
                                + " previous="
                                + Bytes.toStringBinary(prevCell.getFamilyArray(), prevCell.getFamilyOffset(),
                                prevCell.getFamilyLength())
                                + " current="
                                + Bytes.toStringBinary(cell.getFamilyArray(), cell.getFamilyOffset(),
                                cell.getFamilyLength()));
                    }
                }
                prevCell = cell;
            } while (scanner.next());
            LOG.info("Full verification complete for bulk load hfile: " + srcPath.toString()
                    + " took " + (EnvironmentEdgeManager.currentTime() - verificationStartTime)
                    + " ms");
        }
    } finally {
        if (reader != null) reader.close();
    }
}
 
Developer: grokcoder | Project: pbase | Lines: 71 | Source: HStore.java


Example 4: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.getRegionInfo().getRegionNameAsString());
    reader = HFile.createReader(srcPath.getFileSystem(conf),
        srcPath, cacheConf, conf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey != null, "First key can not be null");
    byte[] lk = reader.getLastKey();
    Preconditions.checkState(lk != null, "Last key can not be null");
    byte[] lastKey =  KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
        " last=" + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" +
        Bytes.toStringBinary(getRegionInfo().getStartKey()) +
        " last=" + Bytes.toStringBinary(getRegionInfo().getEndKey()));

    if (!this.getRegionInfo().containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.getRegionInfo().getRegionNameAsString());
    }

    if (verifyBulkLoads) {
      KeyValue prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        KeyValue kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getRowOffset(),
              prevKV.getRowLength(), kv.getBuffer(), kv.getRowOffset(),
              kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getKey()) + " current="
                + Bytes.toStringBinary(kv.getKey()));
          }
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getFamilyOffset(),
              prevKV.getFamilyLength(), kv.getBuffer(), kv.getFamilyOffset(),
              kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous=" + Bytes.toStringBinary(prevKV.getFamily())
                + " current=" + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: tenggyut | Project: HIndex | Lines: 60 | Source: HStore.java


Example 5: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
/**
 * This throws a WrongRegionException if the HFile does not fit in this
 * region, or an InvalidHFileException if the HFile is not valid.
 */
void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.region);
    reader = HFile.createReader(srcPath.getFileSystem(conf),
        srcPath, cacheConf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    byte[] lk = reader.getLastKey();
    byte[] lastKey =
        (lk == null) ? null :
            KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
        " last=" + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" +
        Bytes.toStringBinary(region.getStartKey()) +
        " last=" + Bytes.toStringBinary(region.getEndKey()));

    HRegionInfo hri = region.getRegionInfo();
    if (!hri.containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.region);
    }

    if (verifyBulkLoads) {
      KeyValue prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        KeyValue kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getRowOffset(),
              prevKV.getRowLength(), kv.getBuffer(), kv.getRowOffset(),
              kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getKey()) + " current="
                + Bytes.toStringBinary(kv.getKey()));
          }
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getFamilyOffset(),
              prevKV.getFamilyLength(), kv.getBuffer(), kv.getFamilyOffset(),
              kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous=" + Bytes.toStringBinary(prevKV.getFamily())
                + " current=" + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: wanhao | Project: IRIndex | Lines: 64 | Source: Store.java


Example 6: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
/**
 * This throws a WrongRegionException if the HFile does not fit in this region, or an
 * InvalidHFileException if the HFile is not valid.
 */
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.getRegionInfo().getRegionNameAsString());
    reader = HFile.createReader(srcPath.getFileSystem(conf), srcPath, cacheConf,
      isPrimaryReplicaStore(), conf);
    reader.loadFileInfo();

    Optional<byte[]> firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey.isPresent(), "First key can not be null");
    Optional<Cell> lk = reader.getLastKey();
    Preconditions.checkState(lk.isPresent(), "Last key can not be null");
    byte[] lastKey =  CellUtil.cloneRow(lk.get());

    if (LOG.isDebugEnabled()) {
      LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey.get()) +
          " last=" + Bytes.toStringBinary(lastKey));
      LOG.debug("Region bounds: first=" +
          Bytes.toStringBinary(getRegionInfo().getStartKey()) +
          " last=" + Bytes.toStringBinary(getRegionInfo().getEndKey()));
    }

    if (!this.getRegionInfo().containsRange(firstKey.get(), lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.getRegionInfo().getRegionNameAsString());
    }

    if(reader.length() > conf.getLong(HConstants.HREGION_MAX_FILESIZE,
        HConstants.DEFAULT_MAX_FILE_SIZE)) {
      LOG.warn("Trying to bulk load hfile " + srcPath + " with size: " +
          reader.length() + " bytes can be problematic as it may lead to oversplitting.");
    }

    if (verifyBulkLoads) {
      long verificationStartTime = EnvironmentEdgeManager.currentTime();
      LOG.info("Full verification started for bulk load hfile: {}", srcPath);
      Cell prevCell = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        Cell cell = scanner.getCell();
        if (prevCell != null) {
          if (comparator.compareRows(prevCell, cell) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + CellUtil.getCellKeyAsString(prevCell) + " current="
                + CellUtil.getCellKeyAsString(cell));
          }
          if (CellComparator.getInstance().compareFamilies(prevCell, cell) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous="
                + Bytes.toStringBinary(prevCell.getFamilyArray(), prevCell.getFamilyOffset(),
                    prevCell.getFamilyLength())
                + " current="
                + Bytes.toStringBinary(cell.getFamilyArray(), cell.getFamilyOffset(),
                    cell.getFamilyLength()));
          }
        }
        prevCell = cell;
      } while (scanner.next());
      LOG.info("Full verification complete for bulk load hfile: " + srcPath.toString()
          + " took " + (EnvironmentEdgeManager.currentTime() - verificationStartTime)
          + " ms");
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: apache | Project: hbase | Lines: 76 | Source: HStore.java


Example 7: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.getRegionInfo().getRegionNameAsString());
    reader = HFile.createReader(srcPath.getFileSystem(conf),
        srcPath, cacheConf, conf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey != null, "First key can not be null");
    byte[] lk = reader.getLastKey();
    Preconditions.checkState(lk != null, "Last key can not be null");
    byte[] lastKey =  KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
        " last=" + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" +
        Bytes.toStringBinary(getRegionInfo().getStartKey()) +
        " last=" + Bytes.toStringBinary(getRegionInfo().getEndKey()));

    if (!this.getRegionInfo().containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.getRegionInfo().getRegionNameAsString());
    }

    if (verifyBulkLoads) {
      Cell prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        Cell kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getRowArray(), prevKV.getRowOffset(),
              prevKV.getRowLength(), kv.getRowArray(), kv.getRowOffset(),
              kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(KeyValueUtil.ensureKeyValue(prevKV).getKey()) + " current="
                + Bytes.toStringBinary(KeyValueUtil.ensureKeyValue(kv).getKey()));
          }
          if (Bytes.compareTo(prevKV.getFamilyArray(), prevKV.getFamilyOffset(),
              prevKV.getFamilyLength(), kv.getFamilyArray(), kv.getFamilyOffset(),
              kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous=" + Bytes.toStringBinary(prevKV.getFamily())
                + " current=" + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: shenli-uiuc | Project: PyroDB | Lines: 60 | Source: HStore.java


Example 8: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.getRegionInfo().getRegionNameAsString());
    reader = HFile.createReader(srcPath.getFileSystem(conf),
        srcPath, cacheConf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey != null, "First key can not be null");
    byte[] lk = reader.getLastKey();
    Preconditions.checkState(lk != null, "Last key can not be null");
    byte[] lastKey =  KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
        " last=" + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" +
        Bytes.toStringBinary(getRegionInfo().getStartKey()) +
        " last=" + Bytes.toStringBinary(getRegionInfo().getEndKey()));

    if (!this.getRegionInfo().containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.getRegionInfo().getRegionNameAsString());
    }

    if (verifyBulkLoads) {
      KeyValue prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        KeyValue kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getRowOffset(),
              prevKV.getRowLength(), kv.getBuffer(), kv.getRowOffset(),
              kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getKey()) + " current="
                + Bytes.toStringBinary(kv.getKey()));
          }
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getFamilyOffset(),
              prevKV.getFamilyLength(), kv.getBuffer(), kv.getFamilyOffset(),
              kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous=" + Bytes.toStringBinary(prevKV.getFamily())
                + " current=" + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: cloud-software-foundation | Project: c5 | Lines: 60 | Source: HStore.java


Example 9: assertBulkLoadHFileOk

import org.apache.hadoop.hbase.io.hfile.InvalidHFileException; // import the required package/class
@Override
public void assertBulkLoadHFileOk(Path srcPath) throws IOException {
  HFile.Reader reader  = null;
  try {
    LOG.info("Validating hfile at " + srcPath + " for inclusion in "
        + "store " + this + " region " + this.region);
    reader = HFile.createReader(srcPath.getFileSystem(conf),
        srcPath, cacheConf);
    reader.loadFileInfo();

    byte[] firstKey = reader.getFirstRowKey();
    Preconditions.checkState(firstKey != null, "First key can not be null");
    byte[] lk = reader.getLastKey();
    Preconditions.checkState(lk != null, "Last key can not be null");
    byte[] lastKey =  KeyValue.createKeyValueFromKey(lk).getRow();

    LOG.debug("HFile bounds: first=" + Bytes.toStringBinary(firstKey) +
        " last=" + Bytes.toStringBinary(lastKey));
    LOG.debug("Region bounds: first=" +
        Bytes.toStringBinary(region.getStartKey()) +
        " last=" + Bytes.toStringBinary(region.getEndKey()));

    HRegionInfo hri = region.getRegionInfo();
    if (!hri.containsRange(firstKey, lastKey)) {
      throw new WrongRegionException(
          "Bulk load file " + srcPath.toString() + " does not fit inside region "
          + this.region);
    }

    if (verifyBulkLoads) {
      KeyValue prevKV = null;
      HFileScanner scanner = reader.getScanner(false, false, false);
      scanner.seekTo();
      do {
        KeyValue kv = scanner.getKeyValue();
        if (prevKV != null) {
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getRowOffset(),
              prevKV.getRowLength(), kv.getBuffer(), kv.getRowOffset(),
              kv.getRowLength()) > 0) {
            throw new InvalidHFileException("Previous row is greater than"
                + " current row: path=" + srcPath + " previous="
                + Bytes.toStringBinary(prevKV.getKey()) + " current="
                + Bytes.toStringBinary(kv.getKey()));
          }
          if (Bytes.compareTo(prevKV.getBuffer(), prevKV.getFamilyOffset(),
              prevKV.getFamilyLength(), kv.getBuffer(), kv.getFamilyOffset(),
              kv.getFamilyLength()) != 0) {
            throw new InvalidHFileException("Previous key had different"
                + " family compared to current key: path=" + srcPath
                + " previous=" + Bytes.toStringBinary(prevKV.getFamily())
                + " current=" + Bytes.toStringBinary(kv.getFamily()));
          }
        }
        prevKV = kv;
      } while (scanner.next());
    }
  } finally {
    if (reader != null) reader.close();
  }
}
 
Developer: daidong | Project: DominoHBase | Lines: 61 | Source: HStore.java



Note: The org.apache.hadoop.hbase.io.hfile.InvalidHFileException examples in this article were collected from open-source projects hosted on platforms such as GitHub and MSDocs. Copyright of the code snippets remains with their original authors; consult each project's license before using or redistributing them, and do not republish without permission.

