May 26, 2020 in Amazon Elastic Compute Cloud EC2
Q:

How do I use the Inferentia chip in Inf1 instances in Amazon EC2?

1 Answer

0 votes
May 26, 2020
You can start your workflow by building and training your model in one of the popular ML frameworks such as TensorFlow, PyTorch, or MXNet, using GPU instances such as P3 or P3dn. Once the model is trained to your required accuracy, you can use the ML framework's API to invoke Neuron, the software development kit for Inferentia, to compile the model for execution on Inferentia chips, load it into Inferentia's memory, and then execute inference calls. To get started quickly, you can use the AWS Deep Learning AMIs, which come pre-installed with the ML frameworks and the Neuron SDK. For a fully managed experience, you can use Amazon SageMaker, which lets you seamlessly deploy your trained models on Inf1 instances.
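To illustrate the compile-then-deploy shape of this workflow, here is a minimal sketch in PyTorch. With the Neuron SDK installed, the compile step would be `torch.neuron.trace(model, example)`; since that requires the Neuron packages and an Inf1 target, this sketch uses `torch.jit.trace`, which has the same call shape, so it runs anywhere. The model and filenames are hypothetical.

```python
import torch
import torch.nn as nn

# A small trained model standing in for your real one.
model = nn.Sequential(nn.Linear(4, 2)).eval()
example = torch.rand(1, 4)  # example input used for tracing/compilation

# Compile step. On an instance with the Neuron SDK this would be:
#   traced = torch.neuron.trace(model, example)
# Here we use torch.jit.trace as a stand-in with the same call shape.
traced = torch.jit.trace(model, example)

# Save the compiled artifact, then load it and run inference,
# mirroring "compile, load into memory, execute inference calls".
traced.save("model_traced.pt")
loaded = torch.jit.load("model_traced.pt")
out = loaded(example)
print(tuple(out.shape))
```

The key point is that the framework API stays the same; only the trace/compile call is swapped for the Neuron one, and the saved artifact is then served from an Inf1 instance (or via SageMaker).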
