diff --git a/docs/EN_US/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml b/docs/EN_US/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml index 377e52d0bf9..ac212dfdeac 100644 --- a/docs/EN_US/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml +++ b/docs/EN_US/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml @@ -72,15 +72,15 @@ Introduction - HPCC (High Performance Computing Cluster) Systems is a massive + The HPCC (High Performance Computing Cluster) Systems platform is a massive parallel-processing computing platform that solves Big Data problems. - HPCC Systems platform stores and processes large quantities of + The HPCC Systems platform stores and processes large quantities of data, processing billions of records per second using massive parallel processing technology. Large amounts of data across disparate data sources can be accessed, analyzed, and manipulated in fractions of - seconds. HPCC Systems functions as both a processing and a distributed + seconds. The HPCC Systems platform functions as both a processing and a distributed data storage environment, capable of analyzing terabytes of information. @@ -133,7 +133,7 @@ Clusters - HPCC Systems environment contains clusters which you define and + An HPCC Systems environment contains clusters which you define and use according to your needs. The types of clusters used by HPCC Systems: @@ -431,54 +431,26 @@ Hardware and Software Requirements - This chapter describes some of the hardware and software - requirements in order to run the HPCC Systems platform. HPCC Systems is - designed to run on commodity hardware, which makes building and - maintaining large scale (petabytes) clusters economically feasible. When - planning your cluster hardware, you will need to balance a number of - considerations specific to your needs. - - This section provides some insight into the hardware and - infrastructure that HPCC Systems works well on. 
This is not an exclusive - comprehensive set of instructions, nor a mandate on what hardware you must - have. Consider this as a guide to use when looking to implement or scale - your HPCC Systems platform. These suggestions should be taken into - consideration for your specific enterprise needs. - - - - - - - - - + This chapter provides an overview of the hardware and software requirements for running the HPCC Systems platform optimally. While these requirements were significant when the HPCC Systems platform was first deployed many years ago, there have been substantial improvements in hardware since then. The platform now supports virtual containers and cloud deployments, making the requirements less significant even for large-scale (petabytes) bare-metal deployments. In fact, the HPCC Systems platform should perform satisfactorily on most modern hardware configurations. + Hardware and Components This section provides some insight as to what sort of hardware and - infrastructure optimally HPCC Systems works well on. This is not an + infrastructure the HPCC Systems platform works well on. This is not an exclusive comprehensive set of instructions, nor a mandate on what hardware you must have. Consider this as a guide to use when looking to implement or scale your HPCC Systems platform. These suggestions should be taken into consideration for your specific enterprise needs. - HPCC Systems is designed to run on commodity hardware, which makes + The HPCC Systems platform is designed to run on commodity hardware, which makes building and maintaining large scale (petabytes) clusters economically feasible. When planning your cluster hardware, you will need to balance a number of considerations, including fail-over domains and potential performance issues. Hardware planning should include distributing HPCC Systems across multiple physical hosts, such as a cluster.
Generally, one - type of best practice is to run HPCC Systems processes of a particular + type of best practice is to run the HPCC Systems platform processes of a particular type, for example Thor, Roxie, or Dali, on a host configured specifically for that type of process. @@ -495,7 +467,7 @@ larger physical servers to run multiple Thor slave nodes per physical server. - It is important to note that HPCC Systems by nature is a parallel + It is important to note that the HPCC Systems platform by nature is a parallel processing system and all Thor slave nodes will be exercising at precisely the same time. So when allocating more than one HPCC Systems Thor slave per physical machine assure that each slave meets the @@ -546,12 +518,12 @@ Dali and Sasha Hardware Configurations - HPCC Systems Dali processes store cluster metadata in RAM. For + The HPCC Systems platform Dali processes store cluster metadata in RAM. For optimal efficiency, provide at least 48GB of RAM, 6 or more CPU cores, 1Gb/sec network interface and a high availability disk for a single HPCC - Systems Dali. The HPCC Systems Dali processes are one of the few native + Systems Dali. The HPCC Systems platform Dali processes are one of the few native active/passive components. Using standard "swinging disk" clustering is - recommended for a high availability setup. For a single HPCC Systems + recommended for a high availability setup. For a single HPCC Systems platform Dali process, any suitable High Availability (HA) RAID level is fine. @@ -683,7 +655,7 @@ large files, you will need a tool that supports the secure copy protocol, something like a WinSCP. 
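The data-handling passage above recommends a secure-copy tool such as WinSCP for moving large files. On a Linux workstation the same transfer can be done with plain `scp`; the sketch below only builds and prints the command, and the user, host, and paths are illustrative assumptions rather than values from this guide.

```shell
# Sketch only: stage a large file onto an HPCC node's landing zone with scp.
# The user name, host address, and both paths are illustrative assumptions.
SRC=/data/mylargefile.csv
DEST='hpcc@10.0.0.12:/var/lib/HPCCSystems/mydropzone/'
# -C compresses data in transit, which often helps with large text files
CMD="scp -C $SRC $DEST"
echo "$CMD"
```

Once the source path and target host reflect your own environment, run the printed command (or call `scp` directly) to perform the copy.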
- For more information about HPCC Systems data handling see the + For more information about HPCC Systems platform data handling, see the HPCC Systems® Data Handling and the HPCC Systems® Data Tutorial @@ -804,7 +776,7 @@ Backupnode - Backupnode is a tool that is packaged with HPCC Systems + Backupnode is a tool that is packaged with the HPCC Systems platform. Backupnode allows you to backup Thor nodes on demand or in a script. You can also use backupnode regularly in a crontab or by adding a backupnode component with Configuration Manager to your @@ -914,9 +886,9 @@ Log Files - HPCC Systems provides a wealth of information which can be used to + The HPCC Systems platform provides a wealth of information which can be used to debug, track transactions, application performance, and troubleshooting - purposes. You can review HPCC Systems messages as they are reported and + purposes. You can review the HPCC Systems platform messages as they are reported and captured in the log files. Log files can help you in understanding what is occurring on the system and useful in troubleshooting. @@ -925,7 +897,7 @@ HPCC Systems component files are written to /var/log/HPCCSystems (default location). You - can optionally configure your HPCC Systems to write the logs to a + can optionally configure your HPCC Systems platform to write the logs to a different directory. You should know where the log files are, and refer to the logs first when troubleshooting any issues. @@ -944,7 +916,7 @@ Understanding the log files, and what is normally reported in - the log files, helps in troubleshooting HPCC Systems clusters. + the log files, helps in troubleshooting HPCC Systems platform clusters. As part of routine maintenance you may want to backup, archive, and remove the older log files. Some log files can grow quite large @@ -1124,7 +1096,7 @@ System Configuration and Management - HPCC Systems require configuration.
The Configuration Manager tool + The HPCC Systems platform requires configuration. The Configuration Manager tool (configmgr) included with the system software is a valuable piece of setting up your HPCC Systems platform. The Configuration Manager is a graphical tool provided that can be used to configure your system. @@ -1177,9 +1149,9 @@ Environment.conf - A component of HPCC Systems on bare-metal configuration is the + A component of the HPCC Systems platform on bare-metal configuration is the environment.conf file. Environment.conf contains some global definitions - that the configuration manager uses to configure the HPCC Systems. In + that the configuration manager uses to configure the HPCC Systems platform. In most cases, the defaults are sufficient. The environment.conf file only works for bare-metal deployments. @@ -1501,7 +1473,7 @@ lock=/var/lock/HPCCSystems highest priority, and a value of 19 is the lowest. The default environment.conf file is delivered with the nice - value disabled. If you wish to use nice to prioritize HPCC Systems + value disabled. If you wish to use nice to prioritize HPCC Systems platform processes, you need to modify the environment.conf file to enable nice. You can also adjust the nice value in environment.conf. @@ -1762,7 +1734,7 @@ HPCCPrivateKeyFile=/keyfilepath/keyfile The performance of your system can vary depending on how some components interact. One area which could impact performance is the relationship with users, groups, and Active Directory. If possible, - having a separate Active Directory specific to HPCC Systems could be a + having a separate Active Directory specific to the HPCC Systems platform could be a good policy. There have been a few instances where just one Active Directory servicing many, diverse applications has been less than optimal.
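As a concrete illustration of the nice discussion in the hunk above, here is a minimal sketch of what the edit to environment.conf might look like. The `nice` key name and values shown are assumptions for illustration; verify the exact key against your installed /etc/HPCCSystems/environment.conf before editing.

```shell
# /etc/HPCCSystems/environment.conf (excerpt -- the key name shown here is an
# assumption; check your installed file for the exact spelling)

# as delivered: nice disabled (commented out)
#nice=0

# enabled: start HPCC Systems platform processes at a reduced priority;
# -20 is the highest priority and 19 is the lowest, as noted in the text
nice=5
```

Because nice values only shift scheduling priority relative to other processes on the same host, this is mainly useful when HPCC Systems shares hardware with other workloads.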
@@ -1953,11 +1925,11 @@ HPCCPrivateKeyFile=/keyfilepath/keyfile Best Practices This chapter outlines various forms of best practices established by - long time HPCC Systems users and administrators running HPCC Systems in a + long time HPCC Systems users and administrators running the HPCC Systems platform in a high availability, demanding production environment. While it is not required that you run your environment in this manner, as your specific requirements may vary. This section provides some best practice - recommendations established after several years of running HPCC Systems in + recommendations established after several years of running the HPCC Systems platform in a demanding, intense, production environment. @@ -2662,8 +2634,7 @@ heapUseTransparentHugePages System Resources - There are additional resources available for the HPCC Systems - System. + There are additional resources available for the HPCC Systems platform. HPCC Systems Resources @@ -2687,7 +2658,7 @@ heapUseTransparentHugePages Additional Resources - Additional help with HPCC Systems and Learning ECL is also + Additional help with the HPCC Systems platform and Learning ECL is also available. There are online courses available. Go to : - - - - Installing the HPCC Systems<superscript>®</superscript> Platform: Hardware Module - - - - - - - - - Boca Raton Documentation Team - - - - We welcome your comments and feedback about this document via - email to docfeedback@hpccsystems.com - - Please include Documentation - Feedback in the subject line and reference the document name, - page numbers, and current Version Number in the text of the - message. - - LexisNexis and the Knowledge Burst logo are registered trademarks - of Reed Elsevier Properties Inc., used under license. - - HPCC Systems is a registered trademark of LexisNexis Risk Data - Management Inc. - - Other products, logos, and services may be trademarks or - registered trademarks of their respective companies. 
All names and - example data used in this manual are fictitious. Any similarity to - actual persons, living or dead, is purely coincidental. - - - - - - - - FooterInfo Failed to load - - - - - - - - DateVer Failed to load - - - - - HPCC Systems - - - - - Copyright Failed to load - - - - - - - - - - - - - Hardware and Software Requirements - - This section describes some hardware and software requirements or - recommendations in order to run HPCC Systems. Essentially the HPCC Systems platform is - designed to run on commodity hardware, and would probably work well on - almost any hardware. To really take advantage of the power of an HPCC - Systems platform you should deploy your system on more modern advanced - hardware. - - Hardware and software technology are constantly changing and - improving, therefore the latest most up to date requirements and - recommendation are available on the HPCC Systems Portal. The System - requirements page describes in detail the latest platform - requirements. - - http://hpccsystems.com/permlink/requirements - - - Network Switch - - The network switch is a significant component of the HPCC - Systems platform. - - - Switch requirements - - - - Sufficient number of ports to allow all nodes to be - connected directly to it; - - - - IGMP v.2 support  - - - - IGMP snooping support - - - - Ideally your HPCC Systems will perform better when each node is - connected directly into a single switch. You should be able to provide - a port for each node on a single switch to optimize system - performance. Your switch size should correspond to the size of your - system. You would want to ensure that the switch you use has enough - capacity for each node to be plugged into its own port. 
- - - - Switch additional recommended features - - - - Gigabit speed - - - - Non-blocking/Non-oversubscribed backplane - - - - Low latency (under 35usec) - - - - Layer 3 switching - - - - Managed and monitored (SNMP is a plus) - - - - Port channel (port bundling) support - - - - Generally, higher-end, higher throughput switches are also going - to provide better performance. For larger systems, a high-capacity - managed switch that can be configured and tuned for HPCC Systems efficiency is - the best choice. - - - - - Load Balancer - - A load balancer distributes network traffic across a number of - servers. Each Roxie Node is capable of receiving requests and returning - results. Therefore, a load balancer distributes the load in an efficient - manner to get the best performance and avoid a potential - bottleneck. - - - Load Balancer Requirements - - - Minimum requirements - - - - Throughput: 1 Gigabit - - - - Ethernet ports: 2 - - - - Balancing Strategy: Round Robin - - - - - - Standard requirements - - - - Throughput: 8 Gbps - - - - Gigabit Ethernet ports: 4 - - - - Balancing Strategy: Flexible (F5 iRules or - equivalent) - - - - - - Recommended capabilities - - - - Ability to provide cyclic load rotation (not load - balancing). - - - - Ability to forward SOAP/HTTP traffic - - - - Ability to provide triangulation/n-path routing (traffic - incoming through the load balancer to the node, replies sent - out the via the switch). - - - - Ability to treat a cluster of nodes as a single entity - (for load balancing clusters not nodes) - - or - - - - Ability to stack or tier the load balancers for multiple - levels if not. - - - - - - - - Nodes-Hardware - - An HPCC Systems platform can run as a single node system or a multi node - system. - - These hardware recommendations are intended for a multi-node - production system. A test system can use less stringent specifications. 
- Also, while it is easier to manage a system where all nodes are - identical, this is not required. However, it is important to note that - your system will only run as fast as its slowest node. - - - Node minimum requirements - - - - Pentium 4 or newer CPU - - - - 32-bit - - - - 1GB RAM per slave - - (Note: If you configure more than 1 slave per node, memory - is shared. For example, if you want 2 slaves per node with each - having 4 GB of memory, the server would need 8 GB total.) - - - - One Hard Drive (with sufficient free space to handle the - size of the data you plan to process) or Network Attached - Storage. - - - - 1 GigE network interface - - - - - - Node recommended specifications - - - - Dual Core i7 CPU (or better) - - - - 64-bit - - - - 4 GB RAM (or more) per slave - - - - 1 GigE network interface - - - - PXE boot support in BIOS - - PXE boot support is recommended so you can manage OS, - packages, and other settings when you have a large system - - - - Optionally IPMI and KVM over IP support - - For Roxie nodes: - - - - Two 10K RPM (or faster) SAS Hard Drives - - Typically, drive speed is the priority for Roxie - nodes - - For Thor nodes: - - - - Two 7200K RPM (or faster) SATA Hard Drives (Thor) - - - - Optionally 3 or more hard drives can be configured in a - RAID 5 container for increased performance and - availability - - Typically, drive capacity is the priority for Thor - nodes - - - - - - - Nodes-Software - - All nodes must have the identical operating systems. We recommend - all nodes have identical BIOS settings, and packages installed. This - significantly reduces variables when troubleshooting. It is easier to - manage a system where all nodes are identical, but this is not - required. - - - Operating System Requirements - - Binary installation packages are available for many Linux - Operating systems. HPCC Systems platform requirements are readily - available on the HPCC Systems® Portal. 
- - https://hpccsystems.com/training/documentation/system-requirements - - - - Dependencies - - Installing HPCC Systems on your system depends on having required - component packages installed on the system. The required dependencies - can vary depending on your platform. In some cases the dependencies - are included in the installation packages. In other instances the - installation may fail, and the package management utility will prompt - you for the required packages. Installation of these packages can vary - depending on your platform. For details of the specific installation - commands for obtaining and installing these packages, see the commands - specific to your Operating System. - - Note: - - - For CentOS installations, the Fedora EPEL repository is - required. - - - - - - - SSH Keys - - The HPCC Systems components use ssh keys to authenticate each other. - This is required for communication between nodes. A script to generate - keys has been provided .You should run that script and distribute the - public and private keys to all nodes after you have installed the - packages on all nodes, but before you configure a multi-node - HPCC Systems platform. - - - - As root (or sudo as shown below), generate a new key using - this command: - - sudo /opt/HPCCSystems/sbin/keygen.sh - - - - Distribute the keys to all nodes. From the /home/hpcc/.ssh directory, copy these - three files to the same directory (/home/hpcc/.ssh) on each node: - - - - id_rsa - - - - id_rsa.pub - - - - authorized_keys - - - - Make sure that files retain permissions when they are - distributed. These keys need to be owned by the user "hpcc". - - - - - - - User Workstation Requirements - - - - Running the HPCC Systems platform requires communication from your - user workstation with a browser to the HPCC Systems platform. You will use it to - access ECL Watch--a Web-based interface to your HPCC Systems platform. 
ECL - Watch enables you to examine and manage many aspects of the HPCC Systems platform and - allows you to see information about jobs you run, data files, and - system metrics. - - Use one of the supported web browsers with Javascript - enabled. - - - - Internet Explorer® 11 (or later) - - - - Firefox 3.0 (or later.) - - - - - - Google Chrome 10 (or later) - - - - Safari 10 (or later) - - - - If browser security is set to High, you should add ECLWatch as a Trusted - Site to allow Javascript execution. - - - - - - Install the ECL IDE - - The ECL IDE (Integrated Development Environment) is the tool - used to create queries into your data and ECL files with which to - build your queries. - - Download the ECL IDE from the HPCC Systems web portal. - http://hpccsystems.com - - You can find the ECL IDE and Client Tools on this page using - the following URL: - - http://hpccsystems.com/download/free-community-edition/ecl-ide - - The ECL IDE was designed to run on Windows machines. See the - appendix for instructions on running on Linux workstations using - Wine. - - - - Microsoft VS 2008 C++ compiler (either Express or Professional - edition). This is needed if you are running Windows and want to - compile queries locally. This allows you to compile and run ECL code - on your Windows workstation. - - - - GCC. This is needed if you are running under Linux and want to - compile queries locally on a standalone Linux machine, (although it - may already be available to you since it usually comes with the - operating system). - - - - - diff --git a/docs/PT_BR/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml b/docs/PT_BR/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml index c3dda9d57ef..3c756f139f3 100644 --- a/docs/PT_BR/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml +++ b/docs/PT_BR/HPCCSystemAdmin/HPCCSystemAdministratorsGuide.xml @@ -442,6 +442,11 @@ for implementar ou dimensionar seu HPCC System. 
As sugestões devem ser levadas em conta de acordo com as suas necessidades empresariais específicas. + + diff --git a/docs/PT_BR/Installing_and_RunningTheHPCCPlatform/Inst-Mods/Hardware.xml b/docs/PT_BR/Installing_and_RunningTheHPCCPlatform/Inst-Mods/Hardware.xml deleted file mode 100644 index 2d9c6a1c96e..00000000000 --- a/docs/PT_BR/Installing_and_RunningTheHPCCPlatform/Inst-Mods/Hardware.xml +++ /dev/null @@ -1,509 +0,0 @@ - - - - - Instalando a plataforma HPCC: Módulo de Hardware - - - - - - - - - Equipe de documentação de Boca Raton - - - - Sua opinião e comentários sobre este documento são muito - bem-vindos e podem ser enviados por e-mail para - docfeedback@hpccsystems.com - - Inclua a frase Feedback sobre - documentação na linha de assunto e indique o nome do - documento, o número das páginas e número da versão atual no corpo da - mensagem. - - LexisNexis e o logotipo Knowledge Burst são marcas comerciais - registradas da Reed Elsevier Properties Inc., usadas sob licença. - - HPCC Systems é uma marca comercial registrada da LexisNexis Risk - Data Management Inc. - - Os demais produtos, logotipos e serviços podem ser marcas - comerciais ou registradas de suas respectivas empresas. Todos os nomes e - dados de exemplo usados neste manual são fictícios. Qualquer semelhança - com pessoas reais, vivas ou mortas, é mera coincidência. - - - - - - - - - HPCC Systems - - - - - - - - - - - - Requerimento de Hardware e Software - - Esta seção descreve alguns requisitos ou recomendações de hardware e - software para executar o HPCC. Essencialmente, o sistema HPCC foi - projetado para ser executado em hardware comum, podendo funcionar em quase - todos os tipos de hardware. Para obter um benefício real de toda a - capacidade do sistema HPCC, é preciso implementar o HPCC System em - hardware modernos e mais avançados. - - As tecnologias de hardware e software estão mudando e sendo - aperfeiçoadas constantemente. 
Em função disso, os requisitos e as - recomendações mais recentes e atualizadas estão disponíveis no Portal do - HPCC Systems A página Requisitos do sistema descreve de forma detalhada os - requisitos mais recentes da plataforma. - - http://hpccsystems.com/permlink/requirements - - - Switch de Rede - - A switch de rede é um componente importante do HPCC System. - - - Requerimento do Switch - - - - Número suficiente de portas para permitir que todos os nós - sejam conectados diretamente a ele; - - - - Suporte à IGMP v.2  - - - - Suporte à monitoração IGMP - - - - Seu HPCC System supostamente apresentará um melhor desempenho - quando cada nó estiver conectado diretamente a um único switch. Você - precisa fornecer uma porta para cada nó em um único switch para - otimizar o desempenho do sistema. O tamanho do seu switch deve - corresponder ao tamanho do seu sistema. É importante assegurar que a - chave utilizada tenha capacidade suficiente para que cada nó seja - conectado à sua própria porta. - - - - Recursos adicionais recomendados para o Switch - - - - Velocidade em Gigabit - - - - Backplane sem bloqueio e não sobrecarregado - - - - Baixa latência (menos de 35usec) - - - - Comutação da camada 3 - - - - Gerenciado e monitorado (SNMP é uma vantagem a - mais) - - - - Suporte de canal de porta (agrupamento de portas) - - - - Normalmente, os swithcs de maior qualidade e produtividade - também oferecerão melhor desempenho. Em sistemas maiores, a melhor - opção é usar um switch gerenciado de alta capacidade que possa ser - configurada e ajustada com base na eficiência do HPCC Systems. - - - - - Balanceamento de Carga - - Um balanceador de carga distribui o tráfego da rede entre vários - servidores. Cada nó do Roxie é capaz de receber solicitações e de - retornar resultados. Consequentemente, um balanceador de carga distribui - a carga de maneira eficiente para obter o melhor desempenho e evitar um - possível gargalo. 
- - - Requerimento do Balanceamento de Carga - - - Requisitos mínimos de entrada - - - - Transferência: 1 Gigabit - - - - Portas Ethernet: 2 - - - - Estratégia de balanceamento: Round Robin - - - - - - Requisitos padrão - - - - Transferência: 8Gbps - - - - Portas Gigabit Ethernet: 4 - - - - Estratégia de balanceamento: Flexível (iRules F5 ou - equivalente) - - - - - - Capacidade Recomendada - - - - Capacidade de fornecer rotação de carga cíclica (e não o - balanceamento de carga) - - - - Capacidade de encaminhar o tráfego SOAP/HTTP - - - - Capacidade de fornecer roteamento de triangulação/n-path - (tráfego de entrada através do balanceador de carga para o nó, - respostas enviadas através do switch). - - - - Capacidade de tratar um cluster de nós como uma entidade - única (para clusters de balanceamento de carga, não os - nós) - - ou - - - - Capacidade de empilhar ou estruturar em camadas os - balanceadores de carga em vários níveis. - - - - - - - - Hardware-Nós - - Um HPCC System pode ser executado como um sistema de nó único ou - de nós múltiplos. - - Essas recomendações de hardware destinam-se a um sistema de - produção de vários nós. Um sistema de teste pode usar especificações - menos rigorosas. Além disso, embora seja mais fácil gerenciar um sistema - onde todos os nós sejam idênticos, isso não é obrigatório. Porém, é - importante observar que seu sistema será executado na mesma velocidade - de seu nó mais lento. - - - Requerimento Mínimo para um nó - - - - CPU Pentium 4 ou mais recente - - - - 32-bit - - - - 1GB de RAM por escravo - - (Observação: se você configurar mais de 1 escravo por nó, a - memória será compartilhada. Por exemplo, se desejar 2 escravos por - nó com 4 GB de memória cada, o servidor precisará de 8 GB de - memória total.) - - - - Um disco rígido (com espaço livre suficiente para lidar com - o tamanho dos dados que você pretende processar) ou um - armazenamento conectado à rede. 
- - - - Interface de rede de 1 GigE - - - - - - Especificações recomendadas para os nós - - - - CPU Dual Core i7 (ou melhor) - - - - 64-bit - - - - 4 GB de RAM (ou mais) por escravo - - - - Interface de rede de 1 GigE - - - - Suporte à inicialização PXE no BIOS - - O suporte à inicialização PXE é recomendado para que você - possa gerenciar os pacotes do OS (SO) e outras configurações - quando tiver um sistema maior. - - - - Opcionalmente, suporte para IPMI e KVM sobre IP - - Para os nós do - Roxie: - - - - Dois discos rígidos SAS de 10K RPM (ou mais - rápidos) - - Normalmente, a velocidade do disco é uma prioridade para - os nós do Roxie - - Para os nós do - Thor: - - - - Dois discos rígidos SATA de 7200K RPM (Thor) - - - - Opcionalmente, 3 ou mais discos rígidos podem ser - configurados em um contêiner RAID 5 para melhorar o desempenho e - a disponibilidade. - - Normalmente, a capacidade do disco é uma prioridade para - os nós do Thor - - - - - - - Software-Nós - - Todos os nós devem ter sistemas operacionais idênticos. - Recomendamos que todos os nós tenham instalados configurações BIOS e - pacotes idênticos. Isso diminui significativamente as variáveis em caso - de solução de problemas. Embora seja mais fácil gerenciar um sistema - onde todos os nós são idênticos, isso não é obrigatório. - - - Requerimentos do Sistema Operacional - - Os pacotes de instalação binária estão disponíveis para diversos - sistemas operacionais Linux. Os requisitos da plataforma do HPCC - System estão prontamente disponíveis no Portal do HPCC. - - https://hpccsystems.com/training/documentation/system-requirements - - - - Dependências - - Para instalar o HPCC em seu sistema, é preciso ter os pacotes de - componentes obrigatórios instalados no sistema. As dependências - obrigatórias podem variar de acordo com a sua plataforma. Em alguns - casos, as dependências estão incluídas nos pacotes de instalação. 
Em - outras situações, a instalação pode não ser concluída com sucesso, e o - utilitário de gerenciamento do pacote solicitará os pacotes - obrigatórios. A instalação desses pacotes pode variar de acordo com a - sua plataforma. Para detalhes sobre comandos de instalação específicos - para obter e instalar esses pacotes, consulte os comandos específicos - do seu Sistema Operacional. - - Observação: - - - Para instalações CentOS, o repositório Fedora EPEL é - obrigatório. - - - - - - - Chaves SSH - - Os componentes do HPCC usam chaves ssh para autenticar uns aos - outros Isso é obrigatório para a comunicação entre os nós. Fornecemos - um script para geração de chaves. Você precisa executar esse script e - distribuir as chaves públicas e privadas a todos os nós após ter - instalado os pacotes em todos eles, porém antes de configurar um HPCC - de nós múltiplos. - - - - Como usuário root (ou sudo como mostrado abaixo), gere uma - nova chave usando este comando: - - sudo /opt/HPCCSystems/sbin/keygen.sh - - - - Distribua as chaves para todos os nós. No diretório - /home/hpcc/.ssh , copie esses - três arquivos para o mesmo diretório (/home/hpcc/.ssh) em cada nó: Em todos os - nós - - - - id_rsa - - - - id_rsa.pub - - - - authorized_keys - - - - Lembre-se de que os arquivos devem reter as permissões ao - serem distribuídos. Essas chaves precisam pertencer ao usuário - “hpcc”. - - - - - - - Requerimentos da estação de trabalho - - - - A execução da plataforma HPCC requer comunicação desde a - estação de trabalho do usuário com um navegador até o HPCC. Isso - será usado para acessar o ECL Watch -- uma interface com base na Web - para seu HPCC System. O ECL Watch permite examinar e gerenciar os - vários aspectos do HPCC e permite ver informações sobre tarefas - executadas, arquivos de dados e métricas de sistema. - - Use um dos navegadores Web compatíveis com Javascript - habilitado. 
- - - - Internet Explorer® 11 (ou mais recente) - - - - Firefox 3.0 (ou mais recente) - - - - - - Google Chrome 10 (ou mais recente) - - - - Se a segurança do navegador estiver configurada para Alta, você precisa adicionar o ECLWatch como - site confiável para permitir que o Javascript seja executado. - - - - - - Instale o ECL IDE - - O ECL IDE (Ambiente de desenvolvimento integrado) é uma - ferramenta usada para criar consultas em seus dados e arquivos ECL - com os quais suas consultas serão compiladas. - - Baixe o ECL IDE no portal do HPCC Systems no endereço - http://hpccsystems.com - - Você encontra o ECL IDE e as Ferramentas do Client nesta - página usando o URL: - - http://hpccsystems.com/download/free-community-edition/ecl-ide - - O ECL IDE foi projetado para ser executado em máquinas - Windows. Consulte a seção Anexos para obter instruções sobre como - executar em estações de trabalho Linux usando o Wine. - - - - Compilador Microsoft VS 2008 C++ (edição Express ou - Professional). Isso é necessário caso você esteja executando o - Windows e queira compilar consultas localmente. Isso permite - compilar e executar código ECL em sua estação de trabalho - Windows. - - - - GCC Isso é necessário se você estiver executando o Linux e - deseja compilar consultas localmente em uma máquina Linux autônoma - (pode ser que isso já esteja disponível para você, já que - normalmente acompanha o sistema operacional). - - - - -