
Project folder structure

In subsequent versions the project structure is likely to change a bit, but an overview can give you an idea of the main parts, components, and architecture of the application.

As of v1.0.6:

.
├── LICENCE.md
├── README.md
// dmg config info 
├── appdmg.json
├── assets
// background image for dmg
│   ├── background.png
// app icon for os x dock 
│   └── nw.icns
// nwjs builder script destination folder
├── build
//build script to package the app
├── build.js
// cache folder for nwjs version to use in build 
├── cache
// some config helpers 
├── config
│   ├── README.md
│   ├── build.js
│   ├── jsdoc_conf.json
│   └── make_demo.js
├── config.js
// Jekyll github page site for project page
├── docs
//project
├── lib
//backbone app
│   ├── app
│   │   ├── app.js
│   │   ├── collections
│   │   ├── demo_db.js
│   │   ├── helpers.js
│   │   ├── models
// this is the router for transcriptions
│   │   ├── router.js
// separate router for paper-edits
│   │   ├── router_paperedit.js
//ejs templates
│   │   ├── templates
│   │   └── views
// ffmpeg and ffprobe binaries 
│   ├── bin
│   │   ├── ffmpeg
│   │   └── ffprobe
│   ├── edl_composer
│   │   ├── README.md
│   │   └── index.js
//these modules are run in node context in nwjs
│   ├── interactive_transcription_generator
│   │   ├── README.md
│   │   ├── index.js
│   │   ├── transcriber
│   │   ├── video_metadata_reader
│   │   └── video_to_html5_webm
│   └── srt
│       └── index.js
├── node_modules
// nwjs packaged app
├── nwjs
//app.js is generated by browserify as part of build npm script
│   ├── app.js
│   ├── custom.css
// connect backbone.sync to db. db.js is run in node context in nwjs
│   ├── db.js
//demo data for autoEdit online demo page
│   ├── demo_paperedit.json
//demo data for autoEdit online demo page
│   ├── demo_transcription.json
//entry point for nwjs is index.html
│   ├── index.html
// module that handles the watson credentials
│   └── watson_keys.js
//node package.json file
├── package.json
// tests using jasmine
├── spec
// third party js compiled in browserify
└── vendor
    ├── backbone.async.js
    └── backbone.mousetrap.js

build.js

Deployment script; more info on packaging and building a new release is in the Deployment/build appendix.

docs Project page

Contains a Jekyll site for the project page, hosted on GitHub Pages. Uses the pratt-app-landing-page template for the landing page. It also hosts the online demo, while the user manual has been moved to gitbooks.

Documentation

Initially the project used jsdoc and docco (https://jashkenas.github.io/docco/) for automatically generated documentation; see the updating automated documentation section for more on how to update jsdoc and docco. The documentation has since been moved to gitbooks (which is what you are most likely reading now), and it is still to be decided what the place is, if any, for automatically generated documentation.

spec

Test suite, run with npm run test; uses jasmine for testing. Tests are set up to be all in one place rather than divided among their respective components, for ease of use. Test coverage is far from complete and could do with some attention, so see supporting the project if that's something you'd be interested in getting involved with.

nwjs

This is the front end of the project.

// nwjs packaged app
├── nwjs
//app.js is generated by browserify as part of build npm script
│   ├── app.js
│   ├── custom.css
// connect backbone.sync to db. db.js is run in node context in nwjs
│   ├── db.js
//demo data for autoEdit online demo page
│   ├── demo_paperedit.json
//demo data for autoEdit online demo page
│   ├── demo_transcription.json
//entry point for nwjs is index.html
│   ├── index.html
// module that handles the watson credentials
│   └── watson_keys.js
In index.html the window object is used to provide an interface between the two mixed contexts of nwjs, the browser context and the node context:

// if require is not undefined then we are in the node context of nwjs
if (typeof require !== 'undefined') {
    // other code here..
    window.DB = require('./db.js');
    //...
}

db.js overrides the backbone.sync method to provide a backend for the app and persistent storage, using linvodb3, which uses medeadown to store the db on the user's file system. See the current db setup appendix for more info.

In lib/app/app.js the choice between the demo db and the production db is made:

// Connect up the backend for backbone
if (typeof window.DB !== 'undefined') {
  Backbone.sync = window.DB;
} else {
  Backbone.sync = require('./demo_db');
}

demo_paperedit.json and demo_transcription.json provide the data for the demo when index.html is run client side in the browser, and lib/app/demo_db.js provides the logic for the demo db.
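To give an idea of what that demo logic involves, here is a minimal sketch of a read-only Backbone.sync replacement; the function shape and file paths are assumptions for illustration, not the actual contents of lib/app/demo_db.js.

```js
// Hypothetical sketch of a demo Backbone.sync replacement
// (paths and data shapes are illustrative, not from the real source).
var demoTranscriptions = require('../../nwjs/demo_transcription.json');

module.exports = function (method, model, options) {
  if (method === 'read') {
    // serve the bundled demo json instead of hitting a real db
    options.success(demoTranscriptions);
  } else {
    // create/update/delete are no-ops in demo mode
    options.success(model.toJSON());
  }
};
```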

lib

app contains the backbone project; this is packaged for the client side with browserify.

.
├── lib
//backbone app
│   ├── app
│   │   ├── app.js
│   │   ├── collections
│   │   ├── demo_db.js
│   │   ├── helpers.js
│   │   ├── models
│   │   ├── router.js
│   │   ├── router_paperedit.js
│   │   ├── templates
│   │   └── views
// ffmpeg and ffprobe binaries 
│   ├── bin
│   │   ├── ffmpeg
│   │   └── ffprobe
// module to generate an EDL
│   ├── edl_composer
│   │   ├── README.md
│   │   └── index.js
//these modules are run in node context in nwjs
│   ├── interactive_transcription_generator
│   │   ├── README.md
│   │   ├── index.js
│   │   ├── transcriber
│   │   ├── video_metadata_reader
│   │   └── video_to_html5_webm
// module to compose an srt
│   └── srt
│       └── index.js

lib/app

Backbone app, set up using browserify, with ejs for templating.
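As an illustration of that setup, a view rendering an ejs template might look like the sketch below; the view, the template name, and the presence of an ejs browserify transform are assumptions, not taken from the actual codebase.

```js
// Hypothetical backbone view using an ejs template
// (names are illustrative; assumes an ejs transform such as ejsify).
var Backbone = require('backbone');
var transcriptionTemplate = require('./templates/transcription.ejs');

var TranscriptionView = Backbone.View.extend({
  render: function () {
    // a browserified ejs template exports a plain function
    this.$el.html(transcriptionTemplate(this.model.toJSON()));
    return this;
  }
});
```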

lib/bin/ffmpeg and ffprobe binaries

Packaged binaries of ffmpeg and ffprobe live inside lib/bin so that the app does not rely on them as an external dependency when packaged inside nwjs.

config.js defines the path to where these binaries are stored:

var path = require("path");

module.exports = {
  serverUrl: '',
  appName: 'autoEdit 2',
  ffmpegPath: path.join(process.cwd(),"lib/bin","ffmpeg"),
  ffprobePath: path.join(process.cwd(),"lib/bin","ffprobe"),
};

To access these binaries in the app we can then do the following, e.g. inside lib/interactive_transcription_generator:

var ffmpegPath = require("../../config.js").ffmpegPath;
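For example, the bundled ffprobe could be invoked directly to read a file's duration; this is a minimal sketch (the input file name is a placeholder), not how the actual modules wire it up.

```js
// Minimal sketch: calling the bundled ffprobe binary with child_process
var execFile = require('child_process').execFile;
var ffprobePath = require('../../config.js').ffprobePath;

execFile(ffprobePath, [
  '-v', 'error',
  '-show_entries', 'format=duration',
  '-of', 'default=noprint_wrappers=1:nokey=1',
  'some_video.mp4' // placeholder input file
], function (err, stdout) {
  if (err) { throw err; }
  console.log('duration in seconds:', parseFloat(stdout));
});
```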

edl_composer module

Module to generate an EDL (Edit Decision List); see the EDL format and Export sections, as well as the edl_composer README.
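As a sketch of what driving such a module could look like, assuming a constructor that takes a title plus a list of events and a compose() method that returns the EDL string (an assumed API, not confirmed from the source):

```js
// Hypothetical usage of lib/edl_composer (constructor and compose() assumed).
var EDL = require('./lib/edl_composer');

var edl = new EDL({
  title: 'My paper-edit',
  events: [
    // one event per selection: in/out points plus source clip info
    { startTime: 10.5, endTime: 15.2, clipName: 'interview_1.mov' }
  ]
});

console.log(edl.compose()); // the EDL as plain text
```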

lib/interactive_transcription_generator

After the user uploads a video or audio file, the backbone app overrides the default backbone.sync and calls nwjs/db.js, which, after saving the transcription model in the db, triggers this module to get the STT transcription, video preview, and metadata info. On top of prepping the audio or video file to get a transcription from IBM, it also generates a webm html5 video preview and reads the metadata, which is needed to make an EDL.

At a high level this module:

  • converts the file to an audio file meeting the IBM Watson STT specs, using the submodule transcriber.

  • if the file is longer than 5 minutes, splits it into 5 minute chunks.

  • keeps track of the time offset of each chunk.

  • sends the audio files to the IBM STT API.

  • if the submitted file was longer than 5 minutes, the results coming back as json (after about 5 minutes or less) are re-interpolated into one json file.

  • the json returned by IBM is converted into json meeting the autoEdit transcription specs and saved in the db.

  • the user can now view an interactive transcription.

The transcriber module used by interactive_transcription_generator can also choose between the Gentle open source STT, Pocketsphinx, or IBM to generate the transcription, depending on what was specified by the user; a sketch of what kicking off this pipeline could look like follows.
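This sketch is illustrative only: the exported function, option names, and callback shape are assumptions, not the module's confirmed API.

```js
// Hypothetical invocation of the transcription pipeline
// (function signature and option keys are assumed).
var generateTranscription = require('./lib/interactive_transcription_generator');

generateTranscription({
  videoFile: '/path/to/interview.mov',
  sttEngine: 'ibm', // or 'gentle' / 'pocketsphinx'
  keys: { username: '...', password: '...' } // IBM Watson credentials
}, function (err, transcriptionJson) {
  // transcriptionJson follows the autoEdit transcription json specs
  // and is what gets saved in the db
});
```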

interactive_transcription_generator:

.
├── README.md
├── index.js
├── transcriber
│   ├── convert_to_audio.js
│   ├── gentle_stt_node
│   │   ├── gentle_stt.js
│   │   ├── index.js
│   │   └── parse_gentle_stt.js
│   ├── ibm_stt_node
│   │   ├── parse.js
│   │   ├── sam_transcriber_json_convert.js
│   │   ├── send_to_watson.js
│   │   └── write_out.js
│   ├── index.js
│   ├── pocketsphinx
│   │   ├── README.md
│   │   ├── index.js
// pocketsphinx binaries 
│   │   ├── pocketsphinx
│   │   ├── pocketsphinx.js
│   │   ├── pocketsphinx_converter.js
// pocketsphinx binaries 
│   │   ├── sphinxbase
│   │   └── video_to_audio_for_pocketsphinx.js
│   ├── split.js
│   └── trimmer.js
├── video_metadata_reader
│   └── index.js
└── video_to_html5_webm
    └── index.js

The pocketsphinx module was originally extracted from the Videogrep project.

The implementation of this module is discussed in more detail in subsequent sections.

srt module

The srt module composes srt subtitle files from the transcription json; it is a simplified version of the srt parse composer module, which is also available on npm.
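For illustration, composing an srt block from word-timed lines might look like the sketch below; the helper and input shape are made up for the example, not the module's real API.

```js
// Hypothetical sketch of srt composition (not the module's real API).
function toSrtTime(seconds) {
  var date = new Date(0);
  date.setMilliseconds(seconds * 1000);
  // HH:MM:SS,mmm as required by the srt format
  return date.toISOString().substr(11, 12).replace('.', ',');
}

function composeSrt(lines) {
  return lines.map(function (line, index) {
    return (index + 1) + '\n' +
      toSrtTime(line.startTime) + ' --> ' + toSrtTime(line.endTime) + '\n' +
      line.text + '\n';
  }).join('\n');
}

console.log(composeSrt([{ startTime: 0, endTime: 2.5, text: 'Hello world.' }]));
```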


