FuelAI allows the user to label objects at various points of a video and uses interpolation to generate annotations for the frames between the user-drawn points.
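As an illustration of how this kind of keyframe interpolation can work, the sketch below linearly interpolates axis-aligned bounding boxes between two user-drawn keyframes. The Keyframe class, the (x, y, width, height) box layout, and the function name are assumptions for the example, not FuelAI's internal representation.

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    frame: int   # frame index where the user drew the box
    box: tuple   # (x, y, width, height), assumed layout

def interpolate_boxes(start: Keyframe, end: Keyframe) -> dict:
    """Linearly interpolate a box for every frame between two keyframes."""
    frames = {}
    span = end.frame - start.frame
    for f in range(start.frame + 1, end.frame):
        t = (f - start.frame) / span
        frames[f] = tuple(s + t * (e - s) for s, e in zip(start.box, end.box))
    return frames

# Example: boxes drawn at frame 0 and frame 10 yield nine generated boxes.
generated = interpolate_boxes(
    Keyframe(frame=0, box=(10, 10, 50, 50)),
    Keyframe(frame=10, box=(30, 20, 60, 50)),
)
print(generated[5])  # -> (20.0, 15.0, 55.0, 50.0)
```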
With over 50 ontology classes to choose from, a user can select the class that matches the object they see.
FuelAI offers automatic tracking within the media player. Choose one of our tracking types and watch the magic happen.
FuelAI keeps track of different factors while you are annotating media. These factors include, but are not limited to, the number of labeled objects by type and, for each user, metrics on user-drawn and generated frames.
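As an illustration, the sketch below tallies these kinds of metrics from a list of annotation records. The record fields ("user", "label", "source") are illustrative assumptions, not FuelAI's data model.

```python
from collections import Counter, defaultdict

annotations = [
    {"user": "alice", "label": "car",    "source": "drawn"},
    {"user": "alice", "label": "car",    "source": "generated"},
    {"user": "bob",   "label": "person", "source": "drawn"},
]

# Number of labeled objects by type.
objects_by_type = Counter(a["label"] for a in annotations)

# Per-user counts of user-drawn vs. generated frames.
frames_by_user = defaultdict(Counter)
for a in annotations:
    frames_by_user[a["user"]][a["source"]] += 1

print(objects_by_type)       # Counter({'car': 2, 'person': 1})
print(dict(frames_by_user))  # {'alice': Counter({'drawn': 1, 'generated': 1}), 'bob': Counter({'drawn': 1})}
```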
FuelAI will expose integrator-focused functionality via an API, giving integrators access to source media and a way to provide data back to FuelAI.
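A hypothetical sketch of how an integrator might use such an API over HTTP is shown below. The base URL, endpoint paths, payload shape, and bearer-token header are assumptions for illustration, not FuelAI's published interface.

```python
import requests

BASE_URL = "https://fuelai.example.com/api/v1"          # placeholder host
HEADERS = {"Authorization": "Bearer <integrator-token>"}  # placeholder credential

def fetch_source_media(project_id: str) -> list:
    """Retrieve the media items registered for a project."""
    resp = requests.get(f"{BASE_URL}/projects/{project_id}/media", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

def submit_annotations(media_id: str, annotations: list) -> None:
    """Provide externally generated annotation data back to FuelAI."""
    resp = requests.post(
        f"{BASE_URL}/media/{media_id}/annotations",
        headers=HEADERS,
        json={"annotations": annotations},
    )
    resp.raise_for_status()
```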
FuelAI offers users the ability to send in-application feedback. This could include feature requests or suggestions, defects encountered, or general feedback about FuelAI.
FuelAI will allow Project Managers to customize the workflow options for a specific project.
FuelAI will add the ability to annotate text documents to generate datasets for models that operate on text-based input data, such as sentiment analysis and text encoding.
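As a rough illustration, a generated record for a sentiment-analysis dataset might look like the following; the field names and the one-record-per-line JSON export are assumptions, not a defined FuelAI format.

```python
import json

record = {
    "document_id": "doc-001",
    "text": "The new release fixed the tracking bug and feels much faster.",
    "label": "positive",
}

# One JSON object per line (JSONL) is a common export format for text datasets.
print(json.dumps(record))
```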
Data Scientists will have the ability to specify a more general label classification for labeled objects that fall below a certain size threshold, generalizing the labels of objects deemed too small. Additionally, Data Scientists will be able to track occluded objects throughout a video and have this information output as part of dataset generation.
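A minimal sketch of this dataset-generation behavior is shown below, assuming each labeled object carries a pixel-space box, a class label, and an occlusion flag per frame. The threshold value, the "small_object" fallback label, and the field names are illustrative assumptions, not FuelAI's schema.

```python
MIN_AREA_PX = 32 * 32           # objects smaller than this get the general label
GENERAL_LABEL = "small_object"  # hypothetical fallback classification

def export_record(obj: dict) -> dict:
    """Build one dataset record, generalizing tiny objects and keeping occlusion."""
    _, _, w, h = obj["box"]
    label = obj["label"] if w * h >= MIN_AREA_PX else GENERAL_LABEL
    return {
        "frame": obj["frame"],
        "label": label,
        "box": obj["box"],
        "occluded": obj.get("occluded", False),  # carried through to the dataset output
    }

labeled_objects = [
    {"frame": 12, "label": "pedestrian", "box": (100, 80, 40, 90), "occluded": False},
    {"frame": 12, "label": "pedestrian", "box": (300, 60, 12, 20), "occluded": True},
]
records = [export_record(o) for o in labeled_objects]
# The 12x20 px object falls below the threshold and is exported as "small_object".
```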
For more information, see the User's Guide.