Commit 254b21a: Sync master with 'imz-r2.4'
Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
ashahba committed Jul 26, 2021
2 parents: c0d9a4f + ad4cde7
Showing 1,615 changed files with 113,590 additions and 34,783 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -11,3 +11,5 @@ test_data/
download_glue_data.py
data/
output/
**/**.whl
**/**.tar.gz
13 changes: 7 additions & 6 deletions CODEOWNERS
@@ -1,17 +1,18 @@
# Lines starting with '#' are comments.
# Each line is a file pattern followed by one or more owners.

# These owners will be the default owners for everything in the repo.
* @mlukaszewski @claynerobison @chuanqi129 @agramesh1 @justkw
# These owners will be the default owners for everything in the repo,
# but PR owner should be able to assign other contributors when appropriate
* @ashahba @claynerobison @dmsuehir
datasets @ashahba @claynerobison @dzungductran
docs @claynerobison @mhbuehler
k8s @ashahba @dzungductran @kkasravi
models @agramesh1 @ashraf-bhuiyan @riverliuintel @wei-v-wang

# Order is important. The last matching pattern has the most precedence.
# So if a pull request only touches javascript files, only these owners
# will be requested to review.
#*.js @octocat @github/js

# You can also use email addresses if you prefer.
#docs/* docs@example.com

# paddlepaddle
**/paddlepaddle/** @kbinias @sfraczek @Sand3r- @lidanqing-intel @ddokupil @pmajchrzak @wojtuss
**/PaddlePaddle/** @kbinias @sfraczek @Sand3r- @lidanqing-intel @ddokupil @pmajchrzak @wojtuss
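The comment in the CODEOWNERS hunk above notes that order matters: the last matching pattern takes precedence. A minimal Python sketch of that last-match-wins rule (an illustration only — GitHub's real matcher uses gitignore-style globs, which `fnmatch` merely approximates, and the rule set here is a shortened, hypothetical subset of the file):

```python
from fnmatch import fnmatch

# Hypothetical, shortened subset of the rules in the file above.
# Order matters: rules appearing later override earlier matches.
RULES = [
    ("*", ["@ashahba", "@claynerobison", "@dmsuehir"]),
    ("**/paddlepaddle/**", ["@kbinias", "@sfraczek"]),
]

def owners_for(path, rules=RULES):
    """Return the owners for a path, letting the LAST matching rule win."""
    owners = []
    for pattern, rule_owners in rules:
        # fnmatch is only an approximation of gitignore-style matching
        if fnmatch(path, pattern):
            owners = rule_owners  # later rules override earlier ones
    return owners
```

With these rules, a PaddlePaddle file resolves to the PaddlePaddle owners even though the catch-all `*` rule also matches, because the more specific rule comes last.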
25 changes: 12 additions & 13 deletions README.md
@@ -3,8 +3,6 @@
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.

Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® oneContainer Portal](https://software.intel.com/containers).
Intel Model Zoo is also bundled as a part of
[Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html) (AI Kit).

## Purpose of the Model Zoo

@@ -17,20 +15,21 @@ For any performance and/or benchmarking information on specific Intel platforms,

## How to Use the Model Zoo

### Getting Started

### Getting Started using AI Kit
- The Intel Model Zoo is released as a part of the [Intel® AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html)
which provides a consolidated package of Intel’s latest deep and machine learning optimizations
all in one place for ease of development. Along with Model Zoo, the toolkit also includes Intel
optimized versions of deep learning frameworks (TensorFlow, PyTorch) and high performing Python
libraries to streamline end-to-end data science and AI workflows on Intel architectures.
- The [documentation here](/docs/general/tensorflow/AIKit.md) has instructions on how to get to
the Model Zoo's conda environments and code directory within AI Kit.
- There is a table of TensorFlow models with links to instructions on how to run the models [here](/benchmarks/README.md).
- To get started you can refer to the [ResNet50 FP32 Inference code sample.](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8)

### Getting Started without AI Kit
- If you know what model you are interested in, or if you want to see a full list of models in the Model Zoo, start **[here](/benchmarks)**.
- For framework best practice guides, and step-by-step tutorials for some models in the Model Zoo, start **[here](/docs)**.

- AI Kit provides a consolidated package of Intel’s latest deep and machine
learning optimizations all in one place for ease of development. Along with
Model Zoo, the toolkit also includes Intel optimized versions of deep
learning frameworks (TensorFlow, PyTorch) and high performing Python libraries
to streamline end-to-end data science and AI workflows on Intel architectures.

|[Download AI Kit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit/) |[AI Kit Get Started Guide](https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html) |
|---|---|

### Directory Structure
The Model Zoo is divided into four main directories:
- **[benchmarks](/benchmarks)**: Look here for sample scripts and complete instructions on downloading and running each Intel-optimized pre-trained model.
107 changes: 60 additions & 47 deletions benchmarks/README.md

Large diffs are not rendered by default.

19 changes: 15 additions & 4 deletions benchmarks/common/base_model_init.py
@@ -21,6 +21,7 @@
import glob
import json
import os
import sys
import time


@@ -93,7 +94,7 @@ def __init__(self, args, custom_args=[], platform_util=None):
+ " --map-by ppr:" + str(pps) + ":socket:pe=" + split_a_socket + " --cpus-per-proc " \
+ split_a_socket + " " + self.python_exe

def run_command(self, cmd):
def run_command(self, cmd, replace_unique_output_dir=None):
"""
Prints debug messages when verbose is enabled, and then runs the
specified command.
@@ -118,7 +119,8 @@ def run_command(self, cmd):
"the list of cpu nodes could not be retrieved. Please ensure "
"that your system has numa nodes and numactl is installed.")
else:
self.run_numactl_multi_instance(cmd)
self.run_numactl_multi_instance(
cmd, replace_unique_output_dir=replace_unique_output_dir)
else:
if self.args.verbose:
print("Running: {}".format(str(cmd)))
@@ -136,7 +138,7 @@ def group_cores(self, cpu_cores_list, cores_per_instance):
end_list.append(cpu_cores_list[-count:]) if count != 0 else end_list
return end_list
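The hunk above shows only the tail of `group_cores`, which splits a flat CPU core list into per-instance groups, with any leftover cores forming a final, smaller group. A self-contained reconstruction of that chunking (my sketch for illustration, not the file's exact body):

```python
def group_cores(cpu_cores_list, cores_per_instance):
    """Split a flat list of core IDs into per-instance groups.

    Full groups of cores_per_instance come first; any remainder
    becomes one final, smaller group (matching the tail shown in
    the diff above, where `count` is the remainder).
    """
    count = len(cpu_cores_list) % cores_per_instance
    full_span = len(cpu_cores_list) - count
    end_list = [
        cpu_cores_list[i:i + cores_per_instance]
        for i in range(0, full_span, cores_per_instance)
    ]
    if count != 0:
        # Leftover cores form a partial final group
        end_list.append(cpu_cores_list[-count:])
    return end_list
```

For example, five cores at two per instance yield two full groups plus a one-core remainder group.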

def run_numactl_multi_instance(self, cmd):
def run_numactl_multi_instance(self, cmd, replace_unique_output_dir=None):
"""
Generates a series of commands that call the specified cmd with multiple
instances, where each instance uses a specified number of cores. The
@@ -195,7 +197,15 @@ def run_numactl_multi_instance(self, cmd, replace_unique_output_dir=None):
"numactl --localalloc --physcpubind={1}").format(
len(core_list), ",".join(core_list))
instance_logfile = log_filename_format.format("instance" + str(instance_num))
instance_command = "{} {}".format(prefix, cmd)

unique_command = cmd
if replace_unique_output_dir:
# Swap out the output dir for a unique dir
unique_dir = os.path.join(replace_unique_output_dir,
"instance_{}".format(instance_num))
unique_command = unique_command.replace(replace_unique_output_dir, unique_dir)

instance_command = "{} {}".format(prefix, unique_command)
multi_instance_command += "{} >> {} 2>&1 & \\\n".format(
instance_command, instance_logfile)
instance_logfiles.append(instance_logfile)
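The new `replace_unique_output_dir` logic above swaps the shared output directory in each instance's command for a per-instance subdirectory, so concurrent instances don't clobber each other's results. That swap can be exercised in isolation (a minimal sketch with hypothetical paths; the real method also builds the numactl prefix and log files):

```python
import os

def make_instance_command(cmd, output_dir, instance_num):
    # Mirror the added logic: replace the shared output dir in the
    # command string with a unique per-instance subdirectory.
    unique_dir = os.path.join(output_dir, "instance_{}".format(instance_num))
    return cmd.replace(output_dir, unique_dir)

# Hypothetical command string for illustration:
cmd = "python eval.py --output-dir /tmp/output"
# instance 0 writes under /tmp/output/instance_0,
# instance 1 under /tmp/output/instance_1, and so on.
```

Note the string replacement assumes the output directory appears verbatim in the command, which holds here because the launcher itself assembled the command from the same `--output-dir` value.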
@@ -209,6 +219,7 @@

# Run the multi-instance command
print("\nMulti-instance run:\n" + multi_instance_command)
sys.stdout.flush()
os.system(multi_instance_command)

# Wait to ensure that log files have been written
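The `sys.stdout.flush()` added before `os.system` above matters because Python buffers its own stdout separately from the shell child's: with block buffering (e.g. when output is redirected to a log file), already-printed Python text can otherwise appear after the child's output. A small sketch of the pattern (illustrative command, not the Model Zoo's real one):

```python
import os
import sys

# Flush Python's buffered stdout before launching a child process,
# so this line lands in the log before the child's output.
print("Multi-instance run:")
sys.stdout.flush()

# Illustrative stand-in for the real multi-instance shell command
exit_code = os.system("echo 'instance_0 started'")
```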
