Keyphrases
- Latency (100%)
- Deep Neural Network (100%)
- Limited Resources (100%)
- In-memory (100%)
- Dynamic Deep Neural Network (100%)
- Dynamic Neural Network (100%)
- Resource Allocation (50%)
- Network Structure (50%)
- Speed-accuracy (50%)
- Hardware Resources (50%)
- Hardware Efficiency (50%)
- Compress (50%)
- Magnetic Random Access Memory (50%)
- Compression Method (50%)
- Hardware Accelerator (50%)
- Dynamic Computation (50%)
- Fixed Network (50%)
- Channel Selection (50%)
- Number of Channels (50%)
- Background Modeling (50%)
- Pruning (50%)
- Non-uniform Channel (50%)
- Network Sampling (50%)
- Speed-power (50%)
- IoT Devices (50%)
- Neural Network Structure (50%)
- Deep Neural Network Structure (50%)
- Spin-orbit Torque (50%)
- Weight Quantization (50%)
- Convolution Decomposition (50%)
- Deep Neural Network Compression (50%)
- Model Deployment (50%)
- Memory-based Processing (50%)
- Computing Complexity (50%)
Computer Science
- Deep Neural Network (100%)
- Neural Network (100%)
- In-Memory Processing (100%)
- Neural Network Architecture (40%)
- Resource Allocation (20%)
- Hardware Resource (20%)
- Internet of Things (20%)
- Random Access Memory (20%)
- Subnetwork (20%)
- Hardware Accelerator (20%)
- Network Structures (20%)
- Quantization (Signal Processing) (20%)
- Background Model (20%)
- Deployment Model (20%)