Add documentation and README, use Upsampling2D rather than image Resizing layer
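
The commit also swaps the image Resizing layer for `UpSampling2D` when upsampling feature maps. Below is a minimal sketch of the difference between the two approaches; the shapes and interpolation mode are illustrative assumptions, not values taken from this repo:

```python
import tensorflow as tf

# Hypothetical decoder feature map: batch of 1, 7x7 spatial, 512 channels.
x = tf.random.normal((1, 7, 7, 512))

# UpSampling2D scales by an integer factor using simple nearest-neighbour
# (or bilinear) interpolation and has no trainable weights.
up = tf.keras.layers.UpSampling2D(size=2, interpolation="nearest")
print(up(x).shape)  # (1, 14, 14, 512)

# The image-resizing approach reaches the same shape via tf.image.resize,
# but targets an absolute output size rather than a scale factor.
print(tf.image.resize(x, (14, 14), method="nearest").shape)  # (1, 14, 14, 512)
```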
# Fast Depth TF

TensorFlow 2 implementation of FastDepth.

The original PyTorch implementation is here: https://github.com/dwofk/fast-depth

This code has been tested with TensorFlow 2.4.1; however, any TensorFlow 2.x version should work.

The model has also been successfully optimised using the OpenVINO model optimiser.

## Basic Usage

To train and evaluate the model on the nyu_v2 dataset, simply run:

`python main.py`

WARNING: This will download nyu_v2, which is ~100 GB when including archived and extracted files, plus another 70 GB for generated examples.

The following sample demonstrates creating a FastDepth model that can later be used for inference, training or evaluation.

```python
import fast_depth_functional as fd

# No pretrained weights
model = fd.mobilenet_nnconv5()

# ImageNet weights
model = fd.mobilenet_nnconv5(weights='imagenet')

# Load a trained model from file
model = fd.load_model('my_fastdepth_model')
```
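
Once a model has been created or loaded as above, it is a regular Keras model, so running inference is just a `predict` call. A minimal sketch, assuming a 224x224 RGB input; the input size is an assumption, so check `model.input_shape` for the real value:

```python
import numpy as np
import fast_depth_functional as fd

model = fd.mobilenet_nnconv5()

# Dummy RGB frame; the 224x224 input size is an assumption, not taken from
# this README -- inspect model.input_shape to confirm what the network expects.
frame = np.random.rand(1, 224, 224, 3).astype("float32")

depth = model.predict(frame)  # predicted depth map, e.g. (1, height, width, 1)
print(depth.shape)
```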

### Train

Training with the NYU dataset is as simple as running the following:

WARNING: This will download ~30 GB, plus an extra ~70 GB if you haven't downloaded it already. It also takes a long time to prepare the examples (>1 hour).

```python
import fast_depth_functional as fd

model = fd.mobilenet_nnconv5(weights='imagenet')

# Train, then save the model in Keras H5 format
fd.train(model, save_file='fast_depth')

# A custom dataset can be passed in if required
fd.train(model, dataset=my_dataset)
```
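
The README doesn't pin down what `my_dataset` should look like. A minimal sketch, assuming `fd.train` accepts a batched `tf.data.Dataset` of (RGB image, depth map) pairs; the element structure and resolution are assumptions, not documented behaviour:

```python
import tensorflow as tf

# Purely illustrative stand-in data: random (image, depth) pairs.
# The (image, depth) element structure and the 224x224 resolution are
# assumptions about what fd.train expects, not documented in this README.
images = tf.random.uniform((8, 224, 224, 3))
depths = tf.random.uniform((8, 224, 224, 1))

my_dataset = tf.data.Dataset.from_tensor_slices((images, depths)).batch(4)
```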

### Evaluate

Evaluation is similar to training. The NYU dataset validation split will be used by default, and if you trained as shown above, the dataset will have already been downloaded.

```python
import fast_depth_functional as fd

model = fd.load_model('fast_depth')
fd.compile(model)
fd.evaluate(model)

# A custom dataset for evaluation is supported
fd.evaluate(model, dataset=my_evaluation_dataset)
```

## Troubleshooting

### Windows GPU Fix

If you are using Windows and encounter an error opening cuDNN (you should see CUDNN_STATUS_ALLOC_FAILED somewhere before the error), first check that you have correctly installed the CUDA toolkit and cuDNN. If you have, then run the Windows GPU fix included in this repo:

```python
import fast_depth_functional as fd

# Windows GPU Fix
fd.fix_windows_gpu()
```
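
CUDNN_STATUS_ALLOC_FAILED usually indicates that TensorFlow tried to allocate more GPU memory up front than Windows would give it. Whether `fd.fix_windows_gpu()` does exactly this is an assumption, but the standard stock-TensorFlow workaround is to enable on-demand memory growth:

```python
import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all at start-up,
# which is the usual fix for CUDNN_STATUS_ALLOC_FAILED on Windows.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```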

More information about this error can be found here:
https://forums.developer.nvidia.com/t/could-not-create-cudnn-handle-cudnn-status-alloc-failed/108261