Rendering a 3D scene with Ogre

1. Introduction

Ogre is a cross-platform 3D rendering library. It is open source and hosted on GitHub.

This renderer can output a color image, with support for textures and lighting.

It can also be used by the Model-Based Tracker to check the visibility of the object (see Advanced visibility via Ogre3D). The optional OIS library can additionally be used to animate the rendered object with the keyboard.
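
For instance, with the model-based tracker the Ogre visibility test is enabled through a single call. Below is a minimal sketch (see the dedicated model-based tracking tutorials for the complete setup):

#include <visp3/mbt/vpMbGenericTracker.h>

vpMbGenericTracker tracker;
// Render the CAD model with Ogre3D to determine which faces are currently visible
tracker.setOgreVisibilityTest(true);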

2. Ogre installation

2.1. Installation on Ubuntu

  • It is recommended to install Ogre and OIS from the apt packages.

    To install the 1.9 version of Ogre, run:

    $ sudo apt install libogre-1.9-dev libois-dev

    To install the 1.12 version of Ogre, run:

    $ sudo apt install libogre-1.12-dev libois-dev

    Other Ogre packages exist, but their compatibility with ViSP has not been tested yet.

  • If you want a newer version of Ogre, you can download the sources and build them manually.

    First, install Ogre dependencies:

    $ sudo apt install libgles2-mesa-dev libvulkan-dev glslang-dev libxrandr-dev libxaw7-dev libx11-dev libzzip-dev libsdl2-dev libois-dev

    Then, get Ogre sources and compile it:

    $ mkdir -p $VISP_WS/3rdparty
    $ cd $VISP_WS/3rdparty
    $ git clone https://github.com/OGRECave/ogre
    $ cd ogre
    $ git checkout $OGRE_VERSION # where OGRE_VERSION is a tag name corresponding to a version of ogre, such as v13.1.0 or v14.0.0
    $ mkdir build && mkdir install && cd build
    $ cmake ../ -DCMAKE_INSTALL_PREFIX=$VISP_WS/3rdparty/ogre/install
    $ make -j$(nproc) install
    Note
    Only the tags v1.12.10, v13.1.0 and v14.0.0 were tested.

    Finally, to build ViSP with Ogre support, run the following commands:

    $ cd $VISP_WS/visp-build
    $ cmake ../visp -DOGRE_DIR=$VISP_WS/3rdparty/ogre/install/lib/OGRE/cmake/
    $ make -j$(nproc)

2.2. Installation on Windows (MSVC)

  • Download the pre-built SDK for Windows (MSVC). At the time this tutorial was written, we downloaded the ogre-sdk-v14.4.1-msvc142-x64.zip file.
  • Unzip the pre-built SDK in %VISP_WS%\3rdparty\ogre-sdk-v14.4.1-msvc142-x64
  • Set the OGRE_DIR environment variable to the location of the SDK using a Command Prompt (cmd) terminal:
    C:> setx OGRE_DIR %VISP_WS%\3rdparty\ogre-sdk-v14.4.1-msvc142-x64
  • Modify the Path environment variable to add the path to the Ogre DLLs. In our case we added %OGRE_DIR%\bin.
  • Close all Command Prompt terminals to ensure that both environment variables are taken into account.

2.3. Installation on other platforms

3. Using Ogre for rendering

An example that shows how to exploit Ogre in ViSP to render a color image with support for textures and lighting in camera space is given in AROgre.cpp.

This example reads a sequence of images. For each image it computes the pose of the camera with respect to a frame located at the center of the 4 small blobs, then, using the estimated pose, renders a new image with the image of the sequence in the background, augmented by a grass-covered plane and an animated robot.

To be able to run the program, you need to have downloaded the ViSP dataset and made sure ViSP can detect it (see e.g. the "Install ViSP data set" section of the installation tutorial for your platform).

3.1. Inheriting from vpAROgre to extend its functionalities

In order to extend the default behavior, we will write a class vpAROgreExample that inherits from the vpAROgre class. A class inheriting from vpAROgre should be instantiated with the camera parameters that must be used to render the scene (which correspond to those of the real camera in the case of an Augmented Reality application). The width and height of the rendering window must also be given.
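
As a minimal sketch (the class name MyAROgre is ours, not part of the example), such a subclass only needs to forward the camera parameters and window size to the vpAROgre constructor and override createScene():

class MyAROgre : public vpAROgre
{
public:
  MyAROgre(const vpCameraParameters &cam, unsigned int width, unsigned int height)
    : vpAROgre(cam, width, height)
  { }

protected:
  void createScene() VP_OVERRIDE
  {
    // Populate mSceneMgr with lights, entities, animations, ... (detailed below)
  }
};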

The vpAROgre::init() method must be called in order to load the resources and create the Ogre scene. It should be called with either a color image or a gray-scale image. You should also indicate if you want to listen to keyboard events (which is possible only if the OIS library is installed on your computer) and if you want to hide the rendering window.

Note
When using Ogre 14.0.0, the hidden attribute does not seem to work.

The creation and initialization of the vpAROgreExample object can be found below:

// Create a vpAROgre object with color background
vpAROgreExample ogre(mcam, static_cast<unsigned int>(grabber.getWidth()), static_cast<unsigned int>(grabber.getHeight()));
// Initialize it
bool bufferedKeys = false, hidden = false;
ogre.init(IC, bufferedKeys, hidden);
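
If you prefer a gray-scale background, the same call can be made with a vpImage<unsigned char>. A sketch, where Igray is a hypothetical image name:

vpImage<unsigned char> Igray;
// ... acquire or convert an image into Igray ...
ogre.init(Igray, bufferedKeys, hidden);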

The call to vpAROgre::init() triggers the overridden vpAROgre::createScene() method that creates the different components needed in the scene.

A light must be added to the scene for the rendering. This is done with the Ogre API:

// Set lights
mSceneMgr->setAmbientLight(Ogre::ColourValue(static_cast<float>(0.6), static_cast<float>(0.6), static_cast<float>(0.6))); // Default value of lighting
Ogre::Light *light = mSceneMgr->createLight();
light->setDiffuseColour(1.0, 1.0, 1.0); // scaled RGB values
light->setSpecularColour(1.0, 1.0, 1.0); // scaled RGB values
// Point light
#if (VISP_HAVE_OGRE_VERSION < (1 << 16 | 10 << 8 | 0))
light->setPosition(-5, -5, 10);
#else
Ogre::SceneNode *spotLightNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
spotLightNode->attachObject(light);
spotLightNode->setPosition(Ogre::Vector3(-5, -5, 10));
#endif
light->setType(Ogre::Light::LT_POINT);
light->setAttenuation((Ogre::Real)100, (Ogre::Real)1.0, (Ogre::Real)0.045, (Ogre::Real)0.0075);
// Shadows
light->setCastShadows(true);
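
The preprocessor test above handles the Ogre 1.10 API change that moved light placement onto scene nodes. VISP_HAVE_OGRE_VERSION packs the version as (major << 16 | minor << 8 | patch), so the same guard pattern can be reused for other version-dependent code; a hypothetical illustration:

#if (VISP_HAVE_OGRE_VERSION >= (13 << 16 | 0 << 8 | 0))
// Code that relies on an API available from Ogre 13.0.0 onwards
#else
// Fallback for older Ogre releases
#endif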

Additionally, the robot that we want to insert in our AR application must be added to the scene:

// Create the Entity
robot = mSceneMgr->createEntity("Robot", "robot.mesh");
// Attach robot to scene graph
Ogre::SceneNode *RobotNode = mSceneMgr->getRootSceneNode()->createChildSceneNode("Robot");
RobotNode->attachObject(robot);
RobotNode->scale((Ogre::Real)0.001, (Ogre::Real)0.001, (Ogre::Real)0.001);
RobotNode->pitch(Ogre::Degree(90));
RobotNode->yaw(Ogre::Degree(-90));
robot->setCastShadows(true);
mSceneMgr->setShadowTechnique(Ogre::SHADOWTYPE_STENCIL_MODULATIVE);
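
Note that "robot.mesh" and the "Examples/GrassFloor" material used below come from the Ogre sample media and are expected to be found through the resources declared by vpAROgre (see the resourcePath constructor argument in the full listing). If you want to use your own meshes or materials, an extra folder can be declared to Ogre's resource system; a sketch with a hypothetical path:

// Register an extra folder containing your own .mesh / .material files
Ogre::ResourceGroupManager::getSingleton().addResourceLocation(
    "/path/to/my/media", "FileSystem",
    Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);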

To make it more fun, let's animate the robot:

// Add an animation
// Set the good animation
mAnimationState = robot->getAnimationState("Idle");
// Start over when finished
mAnimationState->setLoop(true);
// Animation enabled
mAnimationState->setEnabled(true);

To update the animation, the following method must be defined and will be called at the end of each frame:

bool customframeEnded(const Ogre::FrameEvent &evt) VP_OVERRIDE
{
  // Update animation
  // To make it move, we add the time elapsed since the last frame
  mAnimationState->addTime(evt.timeSinceLastFrame);
  return true;
}

To make it less weird, we will add a plane on which the robot will stand:

// Add a ground
Ogre::Plane plan;
plan.d = 0;
plan.normal = Ogre::Vector3::UNIT_Z;
Ogre::MeshManager::getSingleton().createPlane("sol", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, plan,
(Ogre::Real)0.22, (Ogre::Real)0.16, 10, 10, true, 1, 1, 1);
Ogre::Entity *ent = mSceneMgr->createEntity("Entitesol", "sol");
Ogre::SceneNode *PlaneNode = mSceneMgr->getRootSceneNode()->createChildSceneNode("Entitesol");
PlaneNode->attachObject(ent);
ent->setMaterialName("Examples/GrassFloor");
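
For reference, the arguments passed to Ogre::MeshManager::createPlane() above are, in order (a plain restatement of the Ogre call, annotated):

Ogre::MeshManager::getSingleton().createPlane(
    "sol",                                                // name of the generated mesh
    Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    plan,                                                 // the Ogre::Plane defined above (normal +Z, d = 0)
    (Ogre::Real)0.22, (Ogre::Real)0.16,                   // width and height, in the same units as the scene (meters)
    10, 10,                                               // number of segments along x and y
    true,                                                 // generate normals
    1, 1, 1);                                             // one set of texture coordinates, tiled once in u and v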

Finally, a method to manage keyboard events is written, in case the OIS library is installed:

#ifdef VISP_HAVE_OIS
bool processInputEvent(const Ogre::FrameEvent & /*evt*/)
{
  mKeyboard->capture();
  Ogre::Matrix3 rotmy;
  double angle = -M_PI / 8;
  if (mKeyboard->isKeyDown(OIS::KC_ESCAPE))
    return false;
  // Event telling that we will have to move, setting the animation to
  // "Walk"; if false, the animation goes back to "Idle"
  bool event = false;
  // Check entries
  if (mKeyboard->isKeyDown(OIS::KC_Z) || mKeyboard->isKeyDown(OIS::KC_UP)) {
    mSceneMgr->getSceneNode("Robot")->setPosition(mSceneMgr->getSceneNode("Robot")->getPosition() +
                                                  (Ogre::Real)0.003 * vecDevant);
    event = true;
  }
  if (mKeyboard->isKeyDown(OIS::KC_S) || mKeyboard->isKeyDown(OIS::KC_DOWN)) {
    mSceneMgr->getSceneNode("Robot")->setPosition(mSceneMgr->getSceneNode("Robot")->getPosition() -
                                                  (Ogre::Real)0.003 * vecDevant);
    event = true;
  }
  if (mKeyboard->isKeyDown(OIS::KC_Q) || mKeyboard->isKeyDown(OIS::KC_LEFT)) {
    rotmy = Ogre::Matrix3((Ogre::Real)cos(-angle), (Ogre::Real)sin(-angle), 0, (Ogre::Real)(-sin(-angle)),
                          (Ogre::Real)cos(-angle), 0, 0, 0, 1);
    vecDevant = vecDevant * rotmy;
    mSceneMgr->getSceneNode("Robot")->yaw(Ogre::Radian((Ogre::Real)(-angle)));
    event = true;
  }
  if (mKeyboard->isKeyDown(OIS::KC_D) || mKeyboard->isKeyDown(OIS::KC_RIGHT)) {
    rotmy = Ogre::Matrix3((Ogre::Real)cos(angle), (Ogre::Real)sin(angle), 0, (Ogre::Real)(-sin(angle)),
                          (Ogre::Real)cos(angle), 0, 0, 0, 1);
    vecDevant = vecDevant * rotmy;
    mSceneMgr->getSceneNode("Robot")->yaw(Ogre::Radian((Ogre::Real)angle));
    event = true;
  }
  // Play the right animation
  if (event) {
    mAnimationState = robot->getAnimationState("Walk");
  }
  else {
    mAnimationState = robot->getAnimationState("Idle");
  }
  // Start over when finished
  mAnimationState->setLoop(true);
  // Animation enabled
  mAnimationState->setEnabled(true);
  return true;
}
#endif

3.2. Rendering loop

The rendering loop is given below:

while (ogre.continueRendering() && !grabber.end() && !quit) {
  // Acquire a frame
  grabber.acquire(IC);
  // Convert it to a grey level image for tracking purpose
  vpImageConvert::convert(IC, I);
  // kill the point list
  mPose.clearPoint();
  // track the dot
  for (int i = 0; i < 4; ++i) {
    // track the point
    md[i].track(I, mcog[i]);
    // pixel->meter conversion
    {
      double x = 0, y = 0;
      vpPixelMeterConversion::convertPoint(mcam, mcog[i], x, y);
      mP[i].set_x(x);
      mP[i].set_y(y);
    }
    // and added to the pose computation point list
    mPose.addPoint(mP[i]);
  }
  // the pose structure has been updated
  // the pose is now updated using the virtual visual servoing approach
  // Dementhon or lagrange is no longer necessary, pose at the
  // previous iteration is sufficient
  mPose.computePose(vpPose::VIRTUAL_VS, cMo);
  // Display with ogre
  ogre.display(IC, cMo);
  // Wait so that the video does not go too fast
  // Measure loop fps
  double t1 = vpTime::measureTimeMs();
  std::cout << "\r> " << 1000 / (t1 - t0) << " fps";
  t0 = t1;
}

First, the image is acquired and converted to gray-scale format:

// Acquire a frame
grabber.acquire(IC);
// Convert it to a grey level image for tracking purpose
vpImageConvert::convert(IC, I);

Then, the pose of the object in the image is computed from the tracked dots:

// kill the point list
mPose.clearPoint();
// track the dot
for (int i = 0; i < 4; ++i) {
  // track the point
  md[i].track(I, mcog[i]);
  // pixel->meter conversion
  {
    double x = 0, y = 0;
    vpPixelMeterConversion::convertPoint(mcam, mcog[i], x, y);
    mP[i].set_x(x);
    mP[i].set_y(y);
  }
  // and added to the pose computation point list
  mPose.addPoint(mP[i]);
}
// the pose structure has been updated
// the pose is now updated using the virtual visual servoing approach
// Dementhon or lagrange is no longer necessary, pose at the
// previous iteration is sufficient
mPose.computePose(vpPose::VIRTUAL_VS, cMo);

This pose is then used by the renderer in order to display the virtual object at the correct location in the image:

// Display with ogre
ogre.display(IC, cMo);
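
The "Wait so that the video does not go too fast" comment in the loop refers to pacing the iterations with vpTime. A minimal sketch of the pattern (the 15 ms period is an arbitrary value chosen for illustration):

double t0 = vpTime::measureTimeMs();
bool rendering = true;
while (rendering) {
  // ... acquire, track and render as above ...
  vpTime::wait(t0, 15); // sleep so that one iteration lasts at least 15 ms
  double t1 = vpTime::measureTimeMs();
  std::cout << "\r> " << 1000 / (t1 - t0) << " fps";
  t0 = t1;
}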

4. Tutorial full code

The full code of AROgre.cpp is given below.

/*
* ViSP, open source Visual Servoing Platform software.
* Copyright (C) 2005 - 2025 by Inria. All rights reserved.
*
* This software is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
* (at your option) any later version.
* See the file LICENSE.txt at the root directory of this source
* distribution for additional information about the GNU GPL.
*
* For using ViSP with software that can not be combined with the GNU
* GPL, please contact Inria about acquiring a ViSP Professional
* Edition License.
*
* See https://visp.inria.fr for more information.
*
* This software was developed at:
* Inria Rennes - Bretagne Atlantique
* Campus Universitaire de Beaulieu
* 35042 Rennes Cedex
* France
*
* If you have questions regarding the use of this file, please contact
* Inria at visp@inria.fr
*
* This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
* WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
*
* Description:
* Implementation of a simple augmented reality application using the vpAROgre
* class.
*/
#include <iostream>
#include <visp3/core/vpConfig.h>
#if defined(VISP_HAVE_OGRE) && defined(VISP_HAVE_DISPLAY)
#include <visp3/ar/vpAROgre.h>
#include <visp3/blob/vpDot2.h>
#include <visp3/core/vpDebug.h>
#include <visp3/core/vpImagePoint.h>
#include <visp3/core/vpIoTools.h>
#include <visp3/core/vpPixelMeterConversion.h>
#include <visp3/core/vpPoint.h>
#include <visp3/gui/vpDisplayFactory.h>
#include <visp3/io/vpParseArgv.h>
#include <visp3/io/vpVideoReader.h>
#include <visp3/vision/vpPose.h>
// List of allowed command line options
#define GETOPTARGS "ci:p:h"
#ifdef ENABLE_VISP_NAMESPACE
using namespace VISP_NAMESPACE_NAME;
#endif
void usage(const char *name, const char *badparam, const std::string &ipath, const std::string &ppath)
{
#if defined(VISP_HAVE_DATASET)
#if VISP_HAVE_DATASET_VERSION >= 0x030600
std::string ext("png");
#else
std::string ext("pgm");
#endif
#else
// We suppose that the user will download a recent dataset
std::string ext("png");
#endif
fprintf(stdout, "\n\
Test augmented reality using the vpAROgre class.\n\
\n\
SYNOPSIS\n\
%s [-i <test image path>] [-p <personal image path>]\n\
[-c] [-h]\n", name);
fprintf(stdout, "\n\
OPTIONS: Default\n\
-i <input image path> %s\n\
Set image input path.\n\
From this path read images \n\
\"mire-2/image.%%04d.%s\". These \n\
images come from visp-images-x.y.z.tar.gz available \n\
on the ViSP website.\n\
Setting the VISP_INPUT_IMAGE_PATH environment\n\
variable produces the same behaviour than using\n\
this option.\n\
\n\
-p <personal image path> %s\n\
Specify a personal sequence containing images \n\
to process.\n\
By image sequence, we mean one file per image.\n\
Example : \"/Temp/visp-images/cube/image.%%04d.%s\"\n\
%%04d is for the image numbering.\n\
\n\
-c\n\
Disable the mouse click. Useful to automate the \n\
execution of this program without human intervention.\n\
\n\
-h\n\
Print the help.\n",
ipath.c_str(), ext.c_str(), ppath.c_str(), ext.c_str());
if (badparam)
fprintf(stdout, "\nERROR: Bad parameter [%s]\n", badparam);
}
bool getOptions(int argc, const char **argv, std::string &ipath, std::string &ppath, bool &click_allowed)
{
const char *optarg_;
int c;
while ((c = vpParseArgv::parse(argc, argv, GETOPTARGS, &optarg_)) > 1) {
switch (c) {
case 'c':
click_allowed = false;
break;
case 'i':
ipath = optarg_;
break;
case 'p':
ppath = optarg_;
break;
case 'h':
usage(argv[0], nullptr, ipath, ppath);
return false;
default:
usage(argv[0], optarg_, ipath, ppath);
return false;
}
}
if ((c == 1) || (c == -1)) {
// standalone param or error
usage(argv[0], nullptr, ipath, ppath);
std::cerr << "ERROR: " << std::endl;
std::cerr << " Bad argument " << optarg_ << std::endl << std::endl;
return false;
}
return true;
}
#ifndef DOXYGEN_SHOULD_SKIP_THIS
class vpAROgreExample : public vpAROgre
{
public:
// The constructor doesn't change here
vpAROgreExample(const vpCameraParameters &cam = vpCameraParameters(), unsigned int width = 640,
unsigned int height = 480, const char *resourcePath = nullptr)
: vpAROgre(cam, width, height)
{
// Direction vectors
if (resourcePath)
mResourcePath = resourcePath;
std::cout << "mResourcePath: " << mResourcePath << std::endl;
vecDevant = Ogre::Vector3(0, -1, 0);
robot = nullptr;
mAnimationState = nullptr;
}
protected:
// Attributes
// Vector to move
Ogre::Vector3 vecDevant;
// Animation attribute
Ogre::AnimationState *mAnimationState;
// The entity representing the robot
Ogre::Entity *robot;
// Our scene will just be a plane
void createScene() VP_OVERRIDE
{
// Set lights
mSceneMgr->setAmbientLight(Ogre::ColourValue(static_cast<float>(0.6), static_cast<float>(0.6), static_cast<float>(0.6))); // Default value of lighting
Ogre::Light *light = mSceneMgr->createLight();
light->setDiffuseColour(1.0, 1.0, 1.0); // scaled RGB values
light->setSpecularColour(1.0, 1.0, 1.0); // scaled RGB values
// Point light
#if (VISP_HAVE_OGRE_VERSION < (1 << 16 | 10 << 8 | 0))
light->setPosition(-5, -5, 10);
#else
Ogre::SceneNode *spotLightNode = mSceneMgr->getRootSceneNode()->createChildSceneNode();
spotLightNode->attachObject(light);
spotLightNode->setPosition(Ogre::Vector3(-5, -5, 10));
#endif
light->setType(Ogre::Light::LT_POINT);
light->setAttenuation((Ogre::Real)100, (Ogre::Real)1.0, (Ogre::Real)0.045, (Ogre::Real)0.0075);
// Shadows
light->setCastShadows(true);
// Create the Entity
robot = mSceneMgr->createEntity("Robot", "robot.mesh");
// Attach robot to scene graph
Ogre::SceneNode *RobotNode = mSceneMgr->getRootSceneNode()->createChildSceneNode("Robot");
RobotNode->attachObject(robot);
RobotNode->scale((Ogre::Real)0.001, (Ogre::Real)0.001, (Ogre::Real)0.001);
RobotNode->pitch(Ogre::Degree(90));
RobotNode->yaw(Ogre::Degree(-90));
robot->setCastShadows(true);
mSceneMgr->setShadowTechnique(Ogre::SHADOWTYPE_STENCIL_MODULATIVE);
// Add an animation
// Set the good animation
mAnimationState = robot->getAnimationState("Idle");
// Start over when finished
mAnimationState->setLoop(true);
// Animation enabled
mAnimationState->setEnabled(true);
// Add a ground
Ogre::Plane plan;
plan.d = 0;
plan.normal = Ogre::Vector3::UNIT_Z;
Ogre::MeshManager::getSingleton().createPlane("sol", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, plan,
(Ogre::Real)0.22, (Ogre::Real)0.16, 10, 10, true, 1, 1, 1);
Ogre::Entity *ent = mSceneMgr->createEntity("Entitesol", "sol");
Ogre::SceneNode *PlaneNode = mSceneMgr->getRootSceneNode()->createChildSceneNode("Entitesol");
PlaneNode->attachObject(ent);
ent->setMaterialName("Examples/GrassFloor");
}
bool customframeEnded(const Ogre::FrameEvent &evt) VP_OVERRIDE
{
// Update animation
// To make it move, we add the time elapsed since the last frame
mAnimationState->addTime(evt.timeSinceLastFrame);
return true;
}
#ifdef VISP_HAVE_OIS
bool processInputEvent(const Ogre::FrameEvent & /*evt*/)
{
mKeyboard->capture();
Ogre::Matrix3 rotmy;
double angle = -M_PI / 8;
if (mKeyboard->isKeyDown(OIS::KC_ESCAPE))
return false;
// Event telling that we will have to move, setting the animation to
// "walk", if false, annimation goes to "Idle"
bool event = false;
// Check entries
if (mKeyboard->isKeyDown(OIS::KC_Z) || mKeyboard->isKeyDown(OIS::KC_UP)) {
mSceneMgr->getSceneNode("Robot")->setPosition(mSceneMgr->getSceneNode("Robot")->getPosition() +
(Ogre::Real)0.003 * vecDevant);
event = true;
}
if (mKeyboard->isKeyDown(OIS::KC_S) || mKeyboard->isKeyDown(OIS::KC_DOWN)) {
mSceneMgr->getSceneNode("Robot")->setPosition(mSceneMgr->getSceneNode("Robot")->getPosition() -
(Ogre::Real)0.003 * vecDevant);
event = true;
}
if (mKeyboard->isKeyDown(OIS::KC_Q) || mKeyboard->isKeyDown(OIS::KC_LEFT)) {
rotmy = Ogre::Matrix3((Ogre::Real)cos(-angle), (Ogre::Real)sin(-angle), 0, (Ogre::Real)(-sin(-angle)),
(Ogre::Real)cos(-angle), 0, 0, 0, 1);
vecDevant = vecDevant * rotmy;
mSceneMgr->getSceneNode("Robot")->yaw(Ogre::Radian((Ogre::Real)(-angle)));
event = true;
}
if (mKeyboard->isKeyDown(OIS::KC_D) || mKeyboard->isKeyDown(OIS::KC_RIGHT)) {
rotmy = Ogre::Matrix3((Ogre::Real)cos(angle), (Ogre::Real)sin(angle), 0, (Ogre::Real)(-sin(angle)),
(Ogre::Real)cos(angle), 0, 0, 0, 1);
vecDevant = vecDevant * rotmy;
mSceneMgr->getSceneNode("Robot")->yaw(Ogre::Radian((Ogre::Real)angle));
event = true;
}
// Play the right animation
if (event) {
mAnimationState = robot->getAnimationState("Walk");
}
else
mAnimationState = robot->getAnimationState("Idle");
// Start over when finished
mAnimationState->setLoop(true);
// Animation enabled
mAnimationState->setEnabled(true);
return true;
}
#endif
};
void computeInitialPose(vpCameraParameters *mcam, vpImage<unsigned char> &I, vpPose *mPose, vpDot2 *md,
vpImagePoint *mcog, vpHomogeneousMatrix *cMo, vpPoint *mP, const bool &opt_click_allowed)
{
// ---------------------------------------------------
// Code inspired from ViSP example of camera pose
// ----------------------------------------------------
bool opt_display = true;
vpDisplay *display = nullptr;
if (opt_display) {
#if defined(VISP_HAVE_DISPLAY)
#else
opt_display = false; // No display is available
#endif
}
for (unsigned int i = 0; i < 4; ++i) {
if (opt_display) {
md[i].setGraphics(true);
}
else {
md[i].setGraphics(false);
}
}
if (opt_display) {
// Display size is automatically defined by the image (I) size
display->init(I, 100, 100, "Preliminary Pose Calculation");
// display the image
// The image class has a member that specify a pointer toward
// the display that has been initialized in the display declaration
// therefore it is no longer necessary to make a reference to the
// display variable.
// Flush the display
}
std::cout << "**"<< std::endl;
std::cout << "** Preliminary Pose Calculation" << std::endl;
std::cout << "** Click on the 4 dots" << std::endl;
std::cout << "** Dot1: (-x,-y,0), Dot2: (x,-y,0), Dot3: (x,y,0), Dot4: (-x,y,0)" << std::endl;
std::cout << "**" << std::endl;
vpImagePoint ip[4];
if (!opt_click_allowed) {
ip[0].set_i(265);
ip[0].set_j(93);
ip[1].set_i(248);
ip[1].set_j(242);
ip[2].set_i(166);
ip[2].set_j(215);
ip[3].set_i(178);
ip[3].set_j(85);
}
for (unsigned int i = 0; i < 4; ++i) {
// by using setGraphics, we request to see the edges of the dot
// in red on the screen.
// It uses the overlay image plane.
// The default of this setting is that it is time consuming
md[i].setGraphics(true);
md[i].setSizePrecision(0.5);
for (unsigned int j = 0; j < i; j++)
md[j].display(I);
// flush the display buffer
try {
if (opt_click_allowed)
md[i].initTracking(I);
else
md[i].initTracking(I, ip[i]);
}
catch (...) {
}
mcog[i] = md[i].getCog();
// an exception is thrown by the track method if
// - dot is lost
// - the number of pixel is too small
// - too many pixels are detected (this is usual when a "big" specularity
// occurs. The threshold can be modified using the setNbMaxPoint(int) method
if (opt_display) {
md[i].display(I);
// flush the display buffer
}
}
if (opt_display) {
// display a red cross (size 10) in the image at the dot center
// of gravity location
//
// WARNING
// in the vpDisplay class member's when pixel coordinates
// are considered the first element is the row index and the second
// is the column index:
// vpDisplay::displayCross(Image, row index, column index, size, color)
// therefore u and v are inverted wrt to the vpDot specification
// Alternatively, to avoid this problem another set of member have
// been defined in the vpDisplay class.
// If the method name is postfixed with _uv the specification is:
// vpDisplay::displayCross_uv(Image, column index, row index, size,
// color)
for (unsigned int i = 0; i < 4; ++i) {
}
// flush the X11 buffer
}
// --------------------------------------------------------
// Now we will compute the pose
// --------------------------------------------------------
// the list of point is cleared (if that's not done before)
mPose->clearPoint();
// we set the 3D points coordinates (in meter !) in the object/world frame
double l = 0.06;
double L = 0.07;
mP[0].setWorldCoordinates(-L, -l, 0); // (X,Y,Z)
mP[1].setWorldCoordinates(+L, -l, 0);
mP[2].setWorldCoordinates(+L, +l, 0);
mP[3].setWorldCoordinates(-L, +l, 0);
// pixel-> meter conversion
for (unsigned int i = 0; i < 4; ++i) {
// u[i], v[i] are expressed in pixels
// conversion in meter is achieved using
// x = (u-u0)/px
// y = (v-v0)/py
// where px, py, u0, v0 are the intrinsic camera parameters
double x = 0, y = 0;
vpPixelMeterConversion::convertPoint(*mcam, mcog[i], x, y);
mP[i].set_x(x);
mP[i].set_y(y);
}
// The pose structure is build, we put in the point list the set of point
// here both 2D and 3D world coordinates are known
for (unsigned int i = 0; i < 4; ++i) {
mPose->addPoint(mP[i]); // and added to the pose computation point list
}
// compute the initial pose using Dementhon method followed by a non linear
// minimization method
// Compute initial pose
// Display briefly just to have a glimpse a the ViSP pose
if (opt_display) {
// Display the computed pose
mPose->display(I, *cMo, *mcam, 0.05, vpColor::red);
}
}
#endif
int main(int argc, const char **argv)
{
#if defined(VISP_HAVE_DATASET)
#if VISP_HAVE_DATASET_VERSION >= 0x030600
std::string ext("png");
#else
std::string ext("pgm");
#endif
#else
// We suppose that the user will download a recent dataset
std::string ext("png");
#endif
try {
std::string env_ipath;
std::string opt_ipath;
std::string ipath;
std::string opt_ppath;
std::string dirname;
std::string filename;
bool opt_click_allowed = true;
// Get the visp-images-data package path or VISP_INPUT_IMAGE_PATH
// environment variable value
// Set the default input path
if (!env_ipath.empty())
ipath = env_ipath;
// Read the command line options
if (getOptions(argc, argv, opt_ipath, opt_ppath, opt_click_allowed) == false) {
return EXIT_FAILURE;
}
// Get the option values
if (!opt_ipath.empty())
ipath = opt_ipath;
// Compare ipath and env_ipath. If they differ, we take into account
// the input path coming from the command line option
if (!opt_ipath.empty() && !env_ipath.empty() && opt_ppath.empty()) {
if (ipath != env_ipath) {
std::cout << std::endl << "WARNING: " << std::endl;
std::cout << " Since -i <visp image path=" << ipath << "> "
<< " is different from VISP_IMAGE_PATH=" << env_ipath << std::endl
<< " we skip the environment variable." << std::endl;
}
}
// Test if an input path is set
if (opt_ipath.empty() && env_ipath.empty() && opt_ppath.empty()) {
usage(argv[0], nullptr, ipath, opt_ppath);
std::cerr << std::endl << "ERROR:" << std::endl;
std::cerr << " Use -i <visp image path> option or set VISP_INPUT_IMAGE_PATH " << std::endl
<< " environment variable to specify the location of the " << std::endl
<< " image path where test images are located." << std::endl
<< " Use -p <personal image path> option if you want to " << std::endl
<< " use personal images." << std::endl
<< std::endl;
return EXIT_FAILURE;
}
std::ostringstream s;
if (opt_ppath.empty()) {
// Set the path location of the image sequence
dirname = vpIoTools::createFilePath(ipath, "mire-2");
// Build the name of the image file
s.setf(std::ios::right, std::ios::adjustfield);
s << "image.%04d.";
s << ext;
}
else {
filename = opt_ppath;
}
// We will read a sequence of images
vpVideoReader grabber;
grabber.setFirstFrameIndex(1);
grabber.setFileName(filename.c_str());
// Grey level image associated to a display in the initial pose
// computation
// Grey level image to track points
// RGBa image to get background
// Matrix representing camera parameters
// Variables used for pose computation purposes
vpPose mPose;
vpDot2 md[4];
vpImagePoint mcog[4];
vpPoint mP[4];
// CameraParameters we got from calibration
// Keep u0 and v0 as center of the screen
try {
std::cout << "Load: " << filename << std::endl;
grabber.open(Idisplay);
grabber.acquire(Idisplay);
vpCameraParameters mcamTmp(592, 570, grabber.getWidth() / 2, grabber.getHeight() / 2);
// Compute the initial pose of the camera
computeInitialPose(&mcamTmp, Idisplay, &mPose, md, mcog, &cMo, mP, opt_click_allowed);
// Close the framegrabber
grabber.close();
// Associate the grabber to the RGBa image
grabber.open(IC);
mcam.init(mcamTmp);
}
catch (...) {
std::cerr << std::endl << "ERROR:" << std::endl;
std::cerr << " Cannot read " << filename << std::endl;
std::cerr << " Check your -i " << ipath << " option " << std::endl
<< " or VISP_INPUT_IMAGE_PATH environment variable." << std::endl;
return EXIT_FAILURE;
}
// Create a vpAROgre object with color background
vpAROgreExample ogre(mcam, static_cast<unsigned int>(grabber.getWidth()), static_cast<unsigned int>(grabber.getHeight()));
// Initialize it
bool bufferedKeys = false, hidden = false;
ogre.init(IC, bufferedKeys, hidden);
double t0 = vpTime::measureTimeMs();
bool quit = false;
while (ogre.continueRendering() && !grabber.end() && !quit) {
// Acquire a frame
grabber.acquire(IC);
// Convert it to a grey level image for tracking purpose
vpImageConvert::convert(IC, I);
// kill the point list
mPose.clearPoint();
// track the dot
for (int i = 0; i < 4; ++i) {
// track the point
md[i].track(I, mcog[i]);
// pixel->meter conversion
{
double x = 0, y = 0;
vpPixelMeterConversion::convertPoint(mcam, mcog[i], x, y);
mP[i].set_x(x);
mP[i].set_y(y);
}
// and added to the pose computation point list
mPose.addPoint(mP[i]);
}
// the pose structure has been updated
// the pose is now updated using the virtual visual servoing approach
// Dementhon or lagrange is no longer necessary, pose at the
// previous iteration is sufficient
mPose.computePose(vpPose::VIRTUAL_VS, cMo);
// Display with ogre
ogre.display(IC, cMo);
// Wait so that the video does not go too fast
// Measure loop fps
double t1 = vpTime::measureTimeMs();
std::cout << "\r> " << 1000 / (t1 - t0) << " fps";
t0 = t1;
}
// Close the grabber
grabber.close();
return EXIT_SUCCESS;
}
catch (const vpException &e) {
std::cout << "Catch a ViSP exception: " << e << std::endl;
return EXIT_FAILURE;
}
catch (Ogre::Exception &e) {
std::cout << "Catch an Ogre exception: " << e.getDescription() << std::endl;
return EXIT_FAILURE;
}
catch (...) {
std::cout << "Catch an exception " << std::endl;
return EXIT_FAILURE;
}
}
#else // VISP_HAVE_OGRE && VISP_HAVE_DISPLAY
int main()
{
#if (!(defined(VISP_HAVE_X11) || defined(VISP_HAVE_GTK) || defined(VISP_HAVE_GDI)))
std::cout << "You do not have X11, or GTK, or GDI (Graphical Device Interface) functionalities to display images..."
<< std::endl;
std::cout << "Tip if you are on a unix-like system:" << std::endl;
std::cout << "- Install X11, configure again ViSP using cmake and build again this example" << std::endl;
std::cout << "Tip if you are on a windows-like system:" << std::endl;
std::cout << "- Install GDI, configure again ViSP using cmake and build again this example" << std::endl;
#else
std::cout << "You do not have Ogre functionalities" << std::endl;
std::cout << "Tip:" << std::endl;
std::cout << "- Install Ogre3D, configure again ViSP using cmake and build again this example" << std::endl;
#endif
return EXIT_SUCCESS;
}
#endif

5. Running the tutorial

Once ViSP is built, you can run the tutorial with:

$ cd $VISP_WS/visp-build/example/ogre-simulator
$ ./AROgre -c

The -c option allows running the example without clicking on the dots of the marker when initializing the sequence.

When the -c option is not used, you have to click counterclockwise on the 4 small white blobs, as in the following image:

Location of the points to click to initialize AROgre example.

A window will appear that lets you select the Ogre settings, such as whether to render in Full Screen mode, or the Video Mode that specifies the size of the rendering window.

Ogre settings window.

Once the settings are accepted, you should see a virtual robot moving on a grass floor. When the OIS third-party library is enabled, this robot can be animated using the keyboard arrows.

Robot added in augmented reality.

6. Known issues

6.1. Texture not displayed

When running the example, the texture of the robot may not be displayed properly. Changing the rendering subsystem in the Ogre settings window should fix the issue.

6.2. Problem of display with OpenGL ES renderer

When using the OpenGL ES renderer, the window might appear entirely black. Choosing another rendering subsystem, such as the OpenGL Rendering Subsystem, should fix the issue.