Java MRConfig Class Code Examples


This article collects typical usage examples of the Java MRConfig class from the org.apache.hadoop.mapreduce package. If you are unsure what MRConfig is for, how to use it, or what real-world usage looks like, the curated class examples below should answer those questions.



The MRConfig class belongs to the org.apache.hadoop.mapreduce package. Twenty code examples are presented below, sorted by popularity.
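
Most of the examples follow the same pattern: read or set one of the MRConfig constants on a Hadoop Configuration object. As a minimal warm-up sketch (assuming a Hadoop 2.x client on the classpath; MRConfig is formally an internal Hadoop API, so treat this as illustrative rather than supported usage):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class MRConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Run MapReduce in-process ("local") instead of on a cluster; several
    // of the tests below use exactly this switch.
    conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
    // The constants are plain property-name strings.
    System.out.println(MRConfig.FRAMEWORK_NAME + " = "
        + conf.get(MRConfig.FRAMEWORK_NAME));
  }
}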

Example 1: initialize

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
public void initialize() {
  Class<? extends ResourceCalculatorProcessTree> clazz =
      PSAgentContext
          .get()
          .getConf()
          .getClass(MRConfig.RESOURCE_CALCULATOR_PROCESS_TREE, null,
              ResourceCalculatorProcessTree.class);
  pTree =
      ResourceCalculatorProcessTree.getResourceCalculatorProcessTree(
          System.getenv().get("JVM_PID"), clazz, PSAgentContext.get().getConf());
  if (pTree != null) {
    pTree.updateProcessTree();
    initCpuCumulativeTime = pTree.getCumulativeCpuTime();
  }
  LOG.info(" Using ResourceCalculatorProcessTree : " + pTree);
}
 
Developer: Tencent, Project: angel, Lines of code: 17, Source: CounterUpdater.java
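
This initializer picks the ResourceCalculatorProcessTree implementation named by MRConfig.RESOURCE_CALCULATOR_PROCESS_TREE and records a CPU baseline. A hedged sketch of the polling step it sets up (the wrapper class is hypothetical; the import path org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree is assumed from the Hadoop 2.x tree, and JVM_PID is normally set by the task launcher, so it may be unset in a standalone run):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.util.ResourceCalculatorProcessTree;

public class CpuUsageProbe {
  public static void main(String[] args) throws InterruptedException {
    Configuration conf = new Configuration();
    // A null class argument lets Hadoop pick a platform-appropriate
    // implementation; the factory returns null on unsupported platforms.
    ResourceCalculatorProcessTree pTree =
        ResourceCalculatorProcessTree.getResourceCalculatorProcessTree(
            System.getenv("JVM_PID"), null, conf);
    if (pTree == null) {
      System.out.println("No process-tree implementation for this platform");
      return;
    }
    pTree.updateProcessTree();
    long baseline = pTree.getCumulativeCpuTime();
    Thread.sleep(1000); // stand-in for real work
    pTree.updateProcessTree();
    System.out.println("CPU ms used: " + (pTree.getCumulativeCpuTime() - baseline));
  }
}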


Example 2: map

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Override
public void map(LongWritable key, Text value, Context context)
    throws IOException {
  StringBuilder sb = new StringBuilder(512);
  for (int i = 0; i < 1000; i++) {
    sb.append("a");
  }
  context.setStatus(sb.toString());
  int progressStatusLength = context.getConfiguration().getInt(
      MRConfig.PROGRESS_STATUS_LEN_LIMIT_KEY,
      MRConfig.PROGRESS_STATUS_LEN_LIMIT_DEFAULT);

  if (context.getStatus().length() > progressStatusLength) {
    throw new IOException("Status is not truncated");
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 17, Source: TestReporter.java
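
The test relies on the framework truncating task status strings to the configured limit. A minimal sketch of setting a custom limit in a driver (key and default taken from the snippet above; the value 128 is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class StatusLengthConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Cap task status strings at 128 characters instead of the default.
    conf.setInt(MRConfig.PROGRESS_STATUS_LEN_LIMIT_KEY, 128);
    System.out.println(conf.getInt(MRConfig.PROGRESS_STATUS_LEN_LIMIT_KEY,
        MRConfig.PROGRESS_STATUS_LEN_LIMIT_DEFAULT));
  }
}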


Example 3: checkCompression

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
private void checkCompression(boolean compressMapOutputs,
                              CompressionType redCompression,
                              boolean includeCombine
                              ) throws Exception {
  JobConf conf = new JobConf(TestMapRed.class);
  Path testdir = new Path(TEST_DIR.getAbsolutePath());
  Path inDir = new Path(testdir, "in");
  Path outDir = new Path(testdir, "out");
  FileSystem fs = FileSystem.get(conf);
  fs.delete(testdir, true);
  FileInputFormat.setInputPaths(conf, inDir);
  FileOutputFormat.setOutputPath(conf, outDir);
  conf.setMapperClass(MyMap.class);
  conf.setReducerClass(MyReduce.class);
  conf.setOutputKeyClass(Text.class);
  conf.setOutputValueClass(Text.class);
  conf.setOutputFormat(SequenceFileOutputFormat.class);
  conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
  if (includeCombine) {
    conf.setCombinerClass(IdentityReducer.class);
  }
  conf.setCompressMapOutput(compressMapOutputs);
  SequenceFileOutputFormat.setOutputCompressionType(conf, redCompression);
  try {
    if (!fs.mkdirs(testdir)) {
      throw new IOException("Mkdirs failed to create " + testdir.toString());
    }
    if (!fs.mkdirs(inDir)) {
      throw new IOException("Mkdirs failed to create " + inDir.toString());
    }
    Path inFile = new Path(inDir, "part0");
    DataOutputStream f = fs.create(inFile);
    f.writeBytes("Owen was here\n");
    f.writeBytes("Hadoop is fun\n");
    f.writeBytes("Is this done, yet?\n");
    f.close();
    RunningJob rj = JobClient.runJob(conf);
    assertTrue("job was complete", rj.isComplete());
    assertTrue("job was successful", rj.isSuccessful());
    Path output = new Path(outDir,
                           Task.getOutputName(0));
    assertTrue("reduce output exists " + output, fs.exists(output));
    SequenceFile.Reader rdr = 
      new SequenceFile.Reader(fs, output, conf);
    assertEquals("is reduce output compressed " + output, 
                 redCompression != CompressionType.NONE, 
                 rdr.isCompressed());
    rdr.close();
  } finally {
    fs.delete(testdir, true);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 53, Source: TestMapRed.java


Example 4: setUp

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@BeforeClass
public static void setUp() throws Exception {
  final Configuration conf = new Configuration();
  
  conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
  conf.set(YarnConfiguration.RM_PRINCIPAL, "jt_id/" + SecurityUtil.HOSTNAME_PATTERN + "@APACHE.ORG");
  
  final MiniDFSCluster.Builder builder = new MiniDFSCluster.Builder(conf);
  builder.checkExitOnShutdown(true);
  builder.numDataNodes(numSlaves);
  builder.format(true);
  builder.racks(null);
  dfsCluster = builder.build();
  
  mrCluster = new MiniMRYarnCluster(TestBinaryTokenFile.class.getName(), noOfNMs);
  mrCluster.init(conf);
  mrCluster.start();

  NameNodeAdapter.getDtSecretManager(dfsCluster.getNamesystem()).startThreads(); 
  
  FileSystem fs = dfsCluster.getFileSystem(); 
  p1 = new Path("file1");
  p1 = fs.makeQualified(p1);
}
 
Developer: naver, Project: hadoop, Lines of code: 25, Source: TestBinaryTokenFile.java


Example 5: runTest

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
private static void runTest(String name, Job job) throws Exception {
  job.setNumReduceTasks(1);
  job.getConfiguration().set(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
  job.getConfiguration().setInt(MRJobConfig.IO_SORT_FACTOR, 1000);
  job.getConfiguration().set("fs.defaultFS", "file:///");
  job.getConfiguration().setInt("test.mapcollection.num.maps", 1);
  job.setInputFormatClass(FakeIF.class);
  job.setOutputFormatClass(NullOutputFormat.class);
  job.setMapperClass(Mapper.class);
  job.setReducerClass(SpillReducer.class);
  job.setMapOutputKeyClass(KeyWritable.class);
  job.setMapOutputValueClass(ValWritable.class);
  job.setSortComparatorClass(VariableComparator.class);

  LOG.info("Running " + name);
  assertTrue("Job failed!", job.waitForCompletion(false));
}
 
Developer: naver, Project: hadoop, Lines of code: 18, Source: TestMapCollection.java


Example 6: setupChildMapredLocalDirs

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
static void setupChildMapredLocalDirs(Task t, JobConf conf) {
  String[] localDirs = conf.getTrimmedStrings(MRConfig.LOCAL_DIR);
  String jobId = t.getJobID().toString();
  String taskId = t.getTaskID().toString();
  boolean isCleanup = t.isTaskCleanupTask();
  String user = t.getUser();
  StringBuffer childMapredLocalDir =
      new StringBuffer(localDirs[0] + Path.SEPARATOR
          + getLocalTaskDir(user, jobId, taskId, isCleanup));
  for (int i = 1; i < localDirs.length; i++) {
    childMapredLocalDir.append("," + localDirs[i] + Path.SEPARATOR
        + getLocalTaskDir(user, jobId, taskId, isCleanup));
  }
  LOG.debug(MRConfig.LOCAL_DIR + " for child : " + childMapredLocalDir);
  conf.set(MRConfig.LOCAL_DIR, childMapredLocalDir.toString());
}
 
Developer: naver, Project: hadoop, Lines of code: 17, Source: LocalJobRunner.java
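
setupChildMapredLocalDirs rewrites MRConfig.LOCAL_DIR as a comma-separated list with one task-local subdirectory per configured local directory. A minimal sketch of round-tripping such a list (constant from the snippet; the directory paths are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class LocalDirSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set(MRConfig.LOCAL_DIR, "/tmp/mr-local-1,/tmp/mr-local-2"); // hypothetical dirs
    // getTrimmedStrings splits on commas and trims whitespace, matching
    // how the method above reads the property.
    for (String dir : conf.getTrimmedStrings(MRConfig.LOCAL_DIR)) {
      System.out.println(dir);
    }
  }
}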


Example 7: testGetClusterStatusWithLocalJobRunner

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testGetClusterStatusWithLocalJobRunner() throws Exception {
  Configuration conf = new Configuration();
  conf.set(JTConfig.JT_IPC_ADDRESS, MRConfig.LOCAL_FRAMEWORK_NAME);
  conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.LOCAL_FRAMEWORK_NAME);
  JobClient client = new JobClient(conf);
  ClusterStatus clusterStatus = client.getClusterStatus(true);
  Collection<String> activeTrackerNames = clusterStatus
      .getActiveTrackerNames();
  Assert.assertEquals(0, activeTrackerNames.size());
  int blacklistedTrackers = clusterStatus.getBlacklistedTrackers();
  Assert.assertEquals(0, blacklistedTrackers);
  Collection<BlackListInfo> blackListedTrackersInfo = clusterStatus
      .getBlackListedTrackersInfo();
  Assert.assertEquals(0, blackListedTrackersInfo.size());
}
 
Developer: naver, Project: hadoop, Lines of code: 17, Source: TestJobClient.java


Example 8: testSetClasspathWithUserPrecendence

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test (timeout = 120000)
public void testSetClasspathWithUserPrecendence() {
   Configuration conf = new Configuration();
   conf.setBoolean(MRConfig.MAPREDUCE_APP_SUBMISSION_CROSS_PLATFORM, true);
   conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, true);
   Map<String, String> env = new HashMap<String, String>();
   try {
     MRApps.setClasspath(env, conf);
   } catch (Exception e) {
     fail("Got exception while setting classpath");
   }
   String env_str = env.get("CLASSPATH");
   String expectedClasspath = StringUtils.join(ApplicationConstants.CLASS_PATH_SEPARATOR,
     Arrays.asList(ApplicationConstants.Environment.PWD.$$(), "job.jar/job.jar",
       "job.jar/classes/", "job.jar/lib/*",
       ApplicationConstants.Environment.PWD.$$() + "/*"));
   assertTrue("MAPREDUCE_JOB_USER_CLASSPATH_FIRST set, but not taking effect!",
     env_str.startsWith(expectedClasspath));
 }
 
Developer: naver, Project: hadoop, Lines of code: 20, Source: TestMRApps.java


Example 9: testSetClasspathWithNoUserPrecendence

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test (timeout = 120000)
public void testSetClasspathWithNoUserPrecendence() {
  Configuration conf = new Configuration();
  conf.setBoolean(MRConfig.MAPREDUCE_APP_SUBMISSION_CROSS_PLATFORM, true);
  conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, false);
  Map<String, String> env = new HashMap<String, String>();
  try {
    MRApps.setClasspath(env, conf);
  } catch (Exception e) {
    fail("Got exception while setting classpath");
  }
  String env_str = env.get("CLASSPATH");
  String expectedClasspath = StringUtils.join(ApplicationConstants.CLASS_PATH_SEPARATOR,
    Arrays.asList("job.jar/job.jar", "job.jar/classes/", "job.jar/lib/*",
      ApplicationConstants.Environment.PWD.$$() + "/*"));
  assertTrue("MAPREDUCE_JOB_USER_CLASSPATH_FIRST false, and job.jar is not in"
    + " the classpath!", env_str.contains(expectedClasspath));
  assertFalse("MAPREDUCE_JOB_USER_CLASSPATH_FIRST false, but taking effect!",
    env_str.startsWith(expectedClasspath));
}
 
Developer: naver, Project: hadoop, Lines of code: 21, Source: TestMRApps.java


Example 10: testSetClasspathWithJobClassloader

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test (timeout = 120000)
public void testSetClasspathWithJobClassloader() throws IOException {
  Configuration conf = new Configuration();
  conf.setBoolean(MRConfig.MAPREDUCE_APP_SUBMISSION_CROSS_PLATFORM, true);
  conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_CLASSLOADER, true);
  Map<String, String> env = new HashMap<String, String>();
  MRApps.setClasspath(env, conf);
  String cp = env.get("CLASSPATH");
  String appCp = env.get("APP_CLASSPATH");
  assertFalse("MAPREDUCE_JOB_CLASSLOADER true, but job.jar is in the"
    + " classpath!", cp.contains("jar" + ApplicationConstants.CLASS_PATH_SEPARATOR + "job"));
  assertFalse("MAPREDUCE_JOB_CLASSLOADER true, but PWD is in the classpath!",
    cp.contains("PWD"));
  String expectedAppClasspath = StringUtils.join(ApplicationConstants.CLASS_PATH_SEPARATOR,
    Arrays.asList(ApplicationConstants.Environment.PWD.$$(), "job.jar/job.jar",
      "job.jar/classes/", "job.jar/lib/*",
      ApplicationConstants.Environment.PWD.$$() + "/*"));
  assertEquals("MAPREDUCE_JOB_CLASSLOADER true, but job.jar is not in the app"
    + " classpath!", expectedAppClasspath, appCp);
}
 
Developer: naver, Project: hadoop, Lines of code: 21, Source: TestMRApps.java
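
Examples 8-10 exercise three related switches. A hedged sketch of setting two of them together before building the container environment (flags from the snippets above; the MRApps import path is assumed from the Hadoop 2.x source tree):

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.MRJobConfig;
import org.apache.hadoop.mapreduce.v2.util.MRApps;

public class ClasspathSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Build a platform-independent classpath and put user jars first.
    conf.setBoolean(MRConfig.MAPREDUCE_APP_SUBMISSION_CROSS_PLATFORM, true);
    conf.setBoolean(MRJobConfig.MAPREDUCE_JOB_USER_CLASSPATH_FIRST, true);
    Map<String, String> env = new HashMap<String, String>();
    MRApps.setClasspath(env, conf);
    System.out.println(env.get("CLASSPATH"));
  }
}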


Example 11: setConf

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
public void setConf(Configuration conf) {
  if (conf instanceof JobConf) {
    this.conf = (JobConf) conf;
  } else {
    this.conf = new JobConf(conf);
  }
  this.mapOutputFile = ReflectionUtils.newInstance(
      conf.getClass(MRConfig.TASK_LOCAL_OUTPUT_CLASS,
        MROutputFiles.class, MapOutputFile.class), conf);
  this.lDirAlloc = new LocalDirAllocator(MRConfig.LOCAL_DIR);
  // add the static resolutions (this is required for the junit to
  // work on testcases that simulate multiple nodes on a single physical
  // node).
  String hostToResolved[] = conf.getStrings(MRConfig.STATIC_RESOLUTIONS);
  if (hostToResolved != null) {
    for (String str : hostToResolved) {
      String name = str.substring(0, str.indexOf('='));
      String resolvedName = str.substring(str.indexOf('=') + 1);
      NetUtils.addStaticResolution(name, resolvedName);
    }
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: Task.java
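
setConf parses each MRConfig.STATIC_RESOLUTIONS entry as name=resolvedName. A minimal sketch of supplying such mappings (constant from the snippet; the host names are hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class StaticResolutionSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Each entry is "name=resolvedName", exactly the format split in setConf().
    conf.setStrings(MRConfig.STATIC_RESOLUTIONS,
        "vnode-1=localhost", "vnode-2=localhost");
  }
}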


Example 12: IFileInputStream

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
/**
 * Create a checksum input stream that verifies the data it reads.
 * @param in The input stream to be verified for checksum.
 * @param len The length of the input stream including checksum bytes.
 * @param conf Configuration supplying the readahead settings.
 */
public IFileInputStream(InputStream in, long len, Configuration conf) {
  this.in = in;
  this.inFd = getFileDescriptorIfAvail(in);
  sum = DataChecksum.newDataChecksum(DataChecksum.Type.CRC32, 
      Integer.MAX_VALUE);
  checksumSize = sum.getChecksumSize();
  length = len;
  dataLength = length - checksumSize;

  conf = (conf != null) ? conf : new Configuration();
  readahead = conf.getBoolean(MRConfig.MAPRED_IFILE_READAHEAD,
      MRConfig.DEFAULT_MAPRED_IFILE_READAHEAD);
  readaheadLength = conf.getInt(MRConfig.MAPRED_IFILE_READAHEAD_BYTES,
      MRConfig.DEFAULT_MAPRED_IFILE_READAHEAD_BYTES);

  doReadahead();
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: IFileInputStream.java
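
The constructor reads two readahead knobs, each with a shipped default. A minimal sketch of turning readahead off, for example while debugging shuffle I/O (keys and defaults from the snippet):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class ReadaheadSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Disable intermediate-file (IFile) readahead.
    conf.setBoolean(MRConfig.MAPRED_IFILE_READAHEAD, false);
    System.out.println(conf.getBoolean(MRConfig.MAPRED_IFILE_READAHEAD,
        MRConfig.DEFAULT_MAPRED_IFILE_READAHEAD));
  }
}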


Example 13: testGetMasterUser

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test 
public void testGetMasterUser() {
  YarnConfiguration conf = new YarnConfiguration();
  conf.set(MRConfig.MASTER_USER_NAME, "foo");
  conf.set(YarnConfiguration.RM_PRINCIPAL, "bar");

  // default is yarn framework  
  assertEquals(Master.getMasterUserName(conf), "bar");

  // set framework name to classic
  conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.CLASSIC_FRAMEWORK_NAME);
  assertEquals(Master.getMasterUserName(conf), "foo");

  // change framework to yarn
  conf.set(MRConfig.FRAMEWORK_NAME, MRConfig.YARN_FRAMEWORK_NAME);
  assertEquals(Master.getMasterUserName(conf), "bar");

}
 
Developer: naver, Project: hadoop, Lines of code: 19, Source: TestMaster.java


Example 14: testClusterAdmins

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testClusterAdmins() {
  Map<JobACL, AccessControlList> tmpJobACLs = new HashMap<JobACL, AccessControlList>();
  Configuration conf = new Configuration();
  String jobOwner = "testuser";
  conf.set(JobACL.VIEW_JOB.getAclName(), jobOwner);
  conf.set(JobACL.MODIFY_JOB.getAclName(), jobOwner);
  conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
  String clusterAdmin = "testuser2";
  conf.set(MRConfig.MR_ADMINS, clusterAdmin);

  JobACLsManager aclsManager = new JobACLsManager(conf);
  tmpJobACLs = aclsManager.constructJobACLs(conf);
  final Map<JobACL, AccessControlList> jobACLs = tmpJobACLs;

  UserGroupInformation callerUGI = UserGroupInformation.createUserForTesting(
      clusterAdmin, new String[] {});

  // cluster admin should have access
  boolean val = aclsManager.checkAccess(callerUGI, JobACL.VIEW_JOB, jobOwner,
      jobACLs.get(JobACL.VIEW_JOB));
  assertTrue("cluster admin should have view access", val);
  val = aclsManager.checkAccess(callerUGI, JobACL.MODIFY_JOB, jobOwner,
      jobACLs.get(JobACL.MODIFY_JOB));
  assertTrue("cluster admin should have modify access", val);
}
 
Developer: naver, Project: hadoop, Lines of code: 27, Source: TestJobAclsManager.java


Example 15: testAclsOff

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testAclsOff() {
  Map<JobACL, AccessControlList> tmpJobACLs = new HashMap<JobACL, AccessControlList>();
  Configuration conf = new Configuration();
  String jobOwner = "testuser";
  conf.set(JobACL.VIEW_JOB.getAclName(), jobOwner);
  conf.setBoolean(MRConfig.MR_ACLS_ENABLED, false);
  String noAdminUser = "testuser2";

  JobACLsManager aclsManager = new JobACLsManager(conf);
  tmpJobACLs = aclsManager.constructJobACLs(conf);
  final Map<JobACL, AccessControlList> jobACLs = tmpJobACLs;

  UserGroupInformation callerUGI = UserGroupInformation.createUserForTesting(
      noAdminUser, new String[] {});
  // acls off so anyone should have access
  boolean val = aclsManager.checkAccess(callerUGI, JobACL.VIEW_JOB, jobOwner,
      jobACLs.get(JobACL.VIEW_JOB));
  assertTrue("acls off so anyone should have access", val);
}
 
Developer: naver, Project: hadoop, Lines of code: 21, Source: TestJobAclsManager.java


Example 16: testGroups

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testGroups() {
  Map<JobACL, AccessControlList> tmpJobACLs = new HashMap<JobACL, AccessControlList>();
  Configuration conf = new Configuration();
  String jobOwner = "testuser";
  conf.set(JobACL.VIEW_JOB.getAclName(), jobOwner);
  conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
  String user = "testuser2";
  String adminGroup = "adminGroup";
  conf.set(MRConfig.MR_ADMINS, " " + adminGroup);

  JobACLsManager aclsManager = new JobACLsManager(conf);
  tmpJobACLs = aclsManager.constructJobACLs(conf);
  final Map<JobACL, AccessControlList> jobACLs = tmpJobACLs;

  UserGroupInformation callerUGI = UserGroupInformation.createUserForTesting(
   user, new String[] {adminGroup});
  // acls off so anyone should have access
  boolean val = aclsManager.checkAccess(callerUGI, JobACL.VIEW_JOB, jobOwner,
      jobACLs.get(JobACL.VIEW_JOB));
  assertTrue("user in admin group should have access", val);
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: TestJobAclsManager.java
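
Examples 14-16 show the two ACL keys working together: MR_ACLS_ENABLED turns enforcement on, and MR_ADMINS grants cluster-wide access. A minimal sketch of an admin-group configuration (keys and the "users groups" ACL syntax from the snippets; the group name is hypothetical):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class AclSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
    // ACL syntax is "users groups"; the leading space means no individual
    // users, only the adminGroup group, as in Example 16.
    conf.set(MRConfig.MR_ADMINS, " adminGroup");
  }
}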


Example 17: testMaxBlockLocationsNewSplits

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testMaxBlockLocationsNewSplits() throws Exception {
  TEST_DIR.mkdirs();
  try {
    Configuration conf = new Configuration();
    conf.setInt(MRConfig.MAX_BLOCK_LOCATIONS_KEY, 4);
    Path submitDir = new Path(TEST_DIR.getAbsolutePath());
    FileSystem fs = FileSystem.getLocal(conf);
    FileSplit split = new FileSplit(new Path("/some/path"), 0, 1,
        new String[] { "loc1", "loc2", "loc3", "loc4", "loc5" });
    JobSplitWriter.createSplitFiles(submitDir, conf, fs,
        new FileSplit[] { split });
    JobSplit.TaskSplitMetaInfo[] infos =
        SplitMetaInfoReader.readSplitMetaInfo(new JobID(), fs, conf,
            submitDir);
    assertEquals("unexpected number of splits", 1, infos.length);
    assertEquals("unexpected number of split locations",
        4, infos[0].getLocations().length);
  } finally {
    FileUtil.fullyDelete(TEST_DIR);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 23, Source: TestJobSplitWriter.java


Example 18: testMaxBlockLocationsOldSplits

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Test
public void testMaxBlockLocationsOldSplits() throws Exception {
  TEST_DIR.mkdirs();
  try {
    Configuration conf = new Configuration();
    conf.setInt(MRConfig.MAX_BLOCK_LOCATIONS_KEY, 4);
    Path submitDir = new Path(TEST_DIR.getAbsolutePath());
    FileSystem fs = FileSystem.getLocal(conf);
    org.apache.hadoop.mapred.FileSplit split =
        new org.apache.hadoop.mapred.FileSplit(new Path("/some/path"), 0, 1,
            new String[] { "loc1", "loc2", "loc3", "loc4", "loc5" });
    JobSplitWriter.createSplitFiles(submitDir, conf, fs,
        new org.apache.hadoop.mapred.InputSplit[] { split });
    JobSplit.TaskSplitMetaInfo[] infos =
        SplitMetaInfoReader.readSplitMetaInfo(new JobID(), fs, conf,
            submitDir);
    assertEquals("unexpected number of splits", 1, infos.length);
    assertEquals("unexpected number of split locations",
        4, infos[0].getLocations().length);
  } finally {
    FileUtil.fullyDelete(TEST_DIR);
  }
}
 
Developer: naver, Project: hadoop, Lines of code: 24, Source: TestJobSplitWriter.java
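
Both split tests cap how many block locations a split may advertise; locations beyond the cap are dropped when the split metadata is written, which is exactly what the assertions check. A one-knob sketch (key from the snippets; the value 4 is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;

public class BlockLocationLimitSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Splits reporting more locations than this are truncated to the cap.
    conf.setInt(MRConfig.MAX_BLOCK_LOCATIONS_KEY, 4);
  }
}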


Example 19: setup

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
@Before
public void setup() throws IOException {
  this.conf = new JobConf();
  this.conf.set(CommonConfigurationKeys.HADOOP_SECURITY_GROUP_MAPPING,
      NullGroupsProvider.class.getName());
  this.conf.setBoolean(MRConfig.MR_ACLS_ENABLED, true);
  Groups.getUserToGroupsMappingService(conf);
  this.ctx = buildHistoryContext(this.conf);
  WebApp webApp = mock(HsWebApp.class);
  when(webApp.name()).thenReturn("hsmockwebapp");
  this.hsWebServices= new HsWebServices(ctx, conf, webApp);
  this.hsWebServices.setResponse(mock(HttpServletResponse.class));

  Job job = ctx.getAllJobs().values().iterator().next();
  this.jobIdStr = job.getID().toString();
  Task task = job.getTasks().values().iterator().next();
  this.taskIdStr = task.getID().toString();
  this.taskAttemptIdStr =
      task.getAttempts().keySet().iterator().next().toString();
}
 
Developer: naver, Project: hadoop, Lines of code: 21, Source: TestHsWebServicesAcls.java


Example 20: configureHighRamProperties

import org.apache.hadoop.mapreduce.MRConfig; // import the required package/class
/**
 * Sets the high ram job properties in the simulated job's configuration.
 */
@SuppressWarnings("deprecation")
static void configureHighRamProperties(Configuration sourceConf, 
                                       Configuration destConf) {
  // set the memory per map task
  scaleConfigParameter(sourceConf, destConf, 
                       MRConfig.MAPMEMORY_MB, MRJobConfig.MAP_MEMORY_MB, 
                       MRJobConfig.DEFAULT_MAP_MEMORY_MB);
  
  // validate and fail early
  validateTaskMemoryLimits(destConf, MRJobConfig.MAP_MEMORY_MB, 
                           JTConfig.JT_MAX_MAPMEMORY_MB);
  
  // set the memory per reduce task
  scaleConfigParameter(sourceConf, destConf, 
                       MRConfig.REDUCEMEMORY_MB, MRJobConfig.REDUCE_MEMORY_MB,
                       MRJobConfig.DEFAULT_REDUCE_MEMORY_MB);
  // validate and fail early
  validateTaskMemoryLimits(destConf, MRJobConfig.REDUCE_MEMORY_MB, 
                           JTConfig.JT_MAX_REDUCEMEMORY_MB);
}
 
Developer: naver, Project: hadoop, Lines of code: 24, Source: GridmixJob.java
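
configureHighRamProperties maps cluster-level slot sizes onto per-job memory requests. A minimal sketch contrasting the two kinds of keys it bridges (constants from the snippet; the megabyte values are illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.MRConfig;
import org.apache.hadoop.mapreduce.MRJobConfig;

public class MemoryKeysSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Cluster-wide map-slot size: what Gridmix scales from.
    conf.setInt(MRConfig.MAPMEMORY_MB, 2048);
    // Per-job map memory request: what it scales to.
    conf.setInt(MRJobConfig.MAP_MEMORY_MB, 3072);
  }
}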



Note: The org.apache.hadoop.mapreduce.MRConfig examples in this article were compiled from source code and documentation hosted on platforms such as GitHub and MSDocs. The snippets come from community-contributed open-source projects; copyright remains with the original authors, and redistribution or use should follow each project's license. Please do not republish without permission.

