MS Open Tech

ANGLE Available Through Visual Studio OpenGLES2 Templates
Fri, 31 Jul 2015

Using ANGLE keeps getting easier! The Visual Studio team has taken the Visual Studio project templates found on our GitHub site and integrated them into their new Visual C++ OpenGLES2 Project Template, available for download now. The template is designed to create Visual Studio 2015 projects for multiple platforms using the same OpenGL ES 2.0 rendering code. Your Android, iOS, and Windows graphics code can stay the same for all variations of your app within your solution.

The greatest addition in this template is the automatic connection to the ANGLE for Windows Store NuGet package. When creating a new project, the template will download the latest version of ANGLE from NuGet. As ANGLE improves via new features or fixes, you can opt to update your existing projects sourced from this template to the latest ANGLE version.
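For reference, a NuGet package reference of this kind typically appears in the project's packages.config file. The entry below is only a sketch: the package id matches the ANGLE.WindowsStore package named later in this feed, but the version number and target framework are illustrative, not taken from the template.

```xml
<?xml version="1.0" encoding="utf-8"?>
<packages>
  <!-- Hypothetical entry: the template wires the project to the
       ANGLE.WindowsStore NuGet package; version and targetFramework
       shown here are illustrative placeholders. -->
  <package id="ANGLE.WindowsStore" version="2.1.0" targetFramework="native" />
</packages>
```

Updating the package version in this file (or via the NuGet client) is how an existing project picks up newer ANGLE builds.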

It’s exciting to be a part of this evolving cross-platform world! We’re anxious to see how you’ll use ANGLE and ways that your experience can be improved. Let us know your thoughts in the issues section of our GitHub site.

Cheers!

Tony Balogh

Senior Program Manager

OpenCV HighGui module available for WinRT
Fri, 10 Jul 2015

MS Open Tech has just made a new contribution to the OpenCV open source project. It introduces support on Modern Windows for most of the API surface of the highgui module, which is used for quick UI prototyping. The only functionality not covered by this contribution is keyboard and mouse event tracking.

We are especially happy to share this code with the community, since highgui is particularly valuable to OpenCV developers who are in the early phases of a new project and want to experiment with different visual effects and image processing algorithms. Being able to leverage a cross-platform API like highgui to stand up functional code quickly is a boon to productivity. That was noted in feedback to some of our previous announcements about OpenCV. As you can tell, we have been listening!

 

Getting started

With the highgui module, you can use quick preview and interaction APIs like namedWindow(), imshow(), createTrackbar(), etc. from Windows Runtime 8.1+ applications.

 

In the example below, we create a relatively small sample layout (MainPage.xaml) with a stack panel and a button:

 

<Page x:Class="FaceDetection.MainPage"
      xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
      xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
      xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
      xmlns:local="using:FaceDetection"
      xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
      mc:Ignorable="d">

    <Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
        <Button x:Name="ProcessBtn"
                Width="218"
                Height="67"
                Margin="69,81,0,0"
                HorizontalAlignment="Left"
                VerticalAlignment="Top"
                Click="processBtn_Click"
                Content="Initialize" />
        <StackPanel x:Name="cvContainer"
                    Width="800"
                    Height="450"
                    Margin="360,85,0,0"
                    HorizontalAlignment="Left"
                    VerticalAlignment="Top" />
    </Grid>
</Page>

 

The cvContainer StackPanel will be used to hold the highgui-generated UI. This code shows an image from the Assets folder and a couple of trackbars:

 

void HighguiSample::MainPage::processBtn_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    // Creating a name that can be passed to a slider callback
    static cv::String windowName("sample");

    // Assigning a container that will be used to hold highgui windows: display image and trackbars
    cv::winrt_initContainer(this->cvContainer);

    // Creating a window without displaying it. This step is not actually required, but is provided for reference
    cv::namedWindow(windowName);

    // Reading an image from Assets
    cv::Mat image = cv::imread("Assets/sample.jpg");

    // Converting the image to the default Windows format so it can be properly displayed
    cv::Mat converted = cv::Mat(image.rows, image.cols, CV_8UC4);
    cvtColor(image, converted, CV_BGR2BGRA);

    // Showing the image in a window (which will be placed within the container referenced above)
    // It'll create the window if it hasn't been created before via namedWindow
    cv::imshow(windowName, converted);

    // Random but ultimate value :)
    int state = 42;

    // Creating callbacks to be used for trackbars
    cv::TrackbarCallback callback = [](int pos, void* userdata)
    {
        if (pos == 0) {
            // This one will destroy the selected window if we scroll it down to zero.
            // We have a single window in this example, and currently only one window can be displayed,
            // but you can have several windows pre-created and switch between displaying either of them
            cv::destroyWindow(windowName);
        }
    };
    cv::TrackbarCallback callbackTwin = [](int pos, void* userdata)
    {
        if (pos >= 70) {
            // This one will destroy all windows if we scroll it up past the provided value
            cv::destroyAllWindows();
        }
    };

    // Create two trackbars with the callbacks created earlier. Trackbars are differentiated by name.
    cv::createTrackbar("Sample trackbar", windowName, &state, 100, callback);
    cv::createTrackbar("Twin brother", windowName, &state, 100, callbackTwin);
}

 

A working sample, extended with face detection and the two trackbars, is available here.


 

This work completes a series of contributions to OpenCV by Microsoft Open Technologies to enable first-class support of Modern Windows, including the upcoming Windows 10. We look forward to seeing community contributions that will further enhance this work!

 

Until then, we hope that you enjoy using OpenCV in Visual Studio. The Windows Store is waiting for your great apps! Happy coding!

 

Adalberto Foresti
Principal Program Manager
Microsoft Open Technologies, Inc

Eric Mittelette
Senior Technical Evangelist
Microsoft Open Technologies, Inc

Office 365 Open Source plugins for Moodle: getting better all the time
Fri, 26 Jun 2015

Earlier today we shared the news that the upcoming Cypress release of Open edX, the most popular open source MOOC (massive open online course) platform, will include new features for tighter integration with Office 365. Those features are the result of our open source collaboration with members of the Open edX community.

In addition to the new work we’re doing with Open edX, we continue to work with Remote-Learner (a leading Moodle partner) to make improvements and additions to the open source Office 365 plugins for Moodle. Moodle is the most popular open source learning management system (LMS), and the Office 365 plugins were released in January of this year. In this post, we’d like to share a few details about the great work Remote-Learner is doing to evolve the plugins.

Evolving plugins to keep up with Office 365 and Moodle

Many of the changes over the last few months were in response to feedback from Moodle and Office 365 users, but there have also been changes due to the ongoing evolution of Moodle and Office, respectively. For example, the plugins were originally released for Moodle 2.7, and Remote-Learner has performed the necessary testing and changes to assure that the plugins work with Moodle 2.8 and now Moodle 2.9, the most recent version.

Another good example is the new User Groups feature in Office 365, which the plugins have exposed within Moodle for use by students and teachers. As Remote-Learner’s Bryan Poss explained in a recent blog post, “Unified user groups are a new feature in Office 365 that provides a way for groups of users to collaborate throughout Office 365 applications. Groups can now be created and maintained for each course in a Moodle site, giving users an easier way to share with the other people in their courses. Teachers have a simple way to share documents with their students, and those students have a simpler way to contact their peers.”

Adapting to feedback from students, teachers and administrators

Many organizations have been testing and deploying the plugins, and their feedback helps guide and prioritize updates. Mike Churchward’s January post on the Moodle forums, for example, has dozens of comments back and forth between early adopters and the Remote-Learner team. Some of the comments identified bugs that have been fixed (specific examples can be found here), and other feedback has resulted in simplification of the user experience.

In the original release in January, you had to use a Microsoft Account (MSA) for the OneNote integration, even if you were using an Office 365 login for the other features. This spring, however, the OneNote team released a new API that enables use of an enterprise login for all of the functionality, including the OneNote integration, so the plugins have been modified to take advantage of this new API. The need for a separate MSA was something some early adopters had found clumsy, and now they can have a streamlined experience using only their Office 365 login.

For more details about the improvements to the open source Office 365 plugins for Moodle that have been released over the last few months, see Remote-Learner’s blog post Microsoft Office 365 Plugins Update as well as MS Open Tech’s blog post Office 365 plugins for Moodle: updates and new features.

Growing Momentum

The Moodle plugin repository provides download statistics for each plugin, and it’s exciting to see how many people are using the Office 365 plugins! The latest stats show more than 180 sites are using the plugins now, and there have been more than 4,000 downloads, with download activity growing steadily over time:

Moodle plugin downloads

Of more than 1,000 plugins in the Moodle plugin directory, Office 365 plugins are all in the top 10 of their respective categories. A few highlights:

  • OneDrive for Business was the number 4 Repository download for the last 12 months, and was the number 2 Repository download for the last two months.
  • OneNote was the number 9 Repository download for the last 12 months and the number 3 Repository download for the last two months.
  • OneNote was the number 5 Assignment download for the last 12 months and was the number 4 Assignment download for the last two months.
  • oEmbed was the number 9 Filter download for the last 12 months and was the number 6 Filter download for the last two months.
  • OpenID Connect was the number 3 Authentication download for the last 12 months and the number 2 Authentication download for the last two months.

We’re pleased to see the growing momentum around this work, and look forward to continued collaboration with open source educational software communities!

Jean Paoli, President
Rob Dolin, Senior Program Manager
Doug Mahugh, Senior Technical Evangelist
Microsoft Open Technologies, Inc.

Open edX + Microsoft Office 365: Better Together
Fri, 26 Jun 2015

In the past few days, key contributions have been accepted into the Open edX codebase to enable integration between Open edX, a popular open source system for massive open online courses (MOOCs), and Office 365's popular productivity software and services.

This continues Microsoft’s contributions to educational open source software including Office 365 integrations with Moodle announced earlier this year.

Background

For readers who may not know, Open edX is an open source platform for teaching and learning. It powers edX.org where Berkeley, Harvard, MIT, IIT Mumbai, Tsinghua University, the University of Arizona, the University of Texas, and many other academic institutions publish MOOCs. Open edX software also powers academic, professional, and vocational learning sites including: Blue Planet Life, Cloud Genius, DrupalX, McKinsey Academy, MongoDB University, University of Alaska, UNC Online and many others.

Microsoft uses edX as well, and in March of this year announced a new set of edX courses designed to provide developers with the skills they need to be successful in the cloud-first, mobile-first world. Taught by well-known Microsoft experts, these courses focus on in-demand skills and feature interactive coding, assessments and exercises to help students build the expertise they need to excel in their careers.

Single Sign-On

With the “Cypress” release coming in July 2015, administrators of Open edX software will be able to enable single sign-on with a variety of identity providers including Facebook, Google, and Office 365.

The story of enabling Office 365 sign-on for Open edX is a story of collaboration that happens frequently in open source software. Initially, an MS Open Tech engineer made a pull request to add support for login with Office 365 to Open edX. A member of the edX team pointed us to another pull request authored by Braden MacDonald from OpenCraft. We connected with Braden, who provided our engineering team with a sandbox for testing. We verified that Braden's pull request would satisfy our scenario as long as it picked up the latest version of another open source library. Earlier today, Braden's pull request incorporating our requirements was merged from the feature branch into the master branch of the code.

During discussions on GitHub, we also found that there was a need for documentation of the new single sign-on / 3rd party authentication functionality. We have volunteered to dedicate some resources to that work.

Insert / Embed File XBlock

Our contributions to Open edX have also included a new XBlock which enables supported files to be inserted or embedded. Like single sign-on, we began with an initial goal of Open edX + Microsoft Office 365 integration and ended up not just contributing Microsoft integration to the open source project, but contributing an XBlock that supports integration with any service that provides a public URL for hosted documents and implements oEmbed.
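As background, oEmbed is a small JSON-over-HTTP format: the consumer asks a provider's endpoint for embed markup for a URL and receives a structured response. The shape below follows the public oEmbed specification; the concrete values (provider name, URL, dimensions) are illustrative only, not taken from the XBlock:

```json
{
  "type": "rich",
  "version": "1.0",
  "provider_name": "ExampleDocs",
  "width": 800,
  "height": 450,
  "html": "<iframe src=\"https://example.com/embed/doc123\" width=\"800\" height=\"450\"></iframe>"
}
```

A consumer like the XBlock can take the `html` field from such a response and embed it directly in a course page, which is why any oEmbed-capable host works.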

The “File Storage” XBlock enables course authors to insert a hyperlink to a file or embed a file from a large number of file hosting solutions. Our team has tested: Box, Dropbox, Google docs, Office Mix, OneDrive, Slideshare, Soundcloud, TED, YouTube, and more. You can find a full list of tested file hosts in the XBlock’s ReadMe file.

Documentation, installation instructions, and the open source code for the “File Storage” XBlock is at: https://github.com/MSOpenTech/xblock-filestorage

Office Mix XBlock

MS Open Tech is not the only team from Microsoft contributing to Open edX. The Office Mix team has developed an XBlock for embedding content authored in Office Mix into an Open edX course. The XBlock was originally published at the end of 2014, and the Mix team is working to ensure that all Office Mix content embedded in Open edX courses is accessible. Thanks to the flexible XBlock architecture, once the remaining accessibility issues are addressed, all Office Mix content embedded in Open edX courses will automatically get the fixes.

Documentation, installation instructions, and the open source code for the Office Mix XBlock are at: https://github.com/OfficeDev/xblock-officemix

Future Contributions

In addition to our collaboration with Braden, we are appreciative of the friendly, welcoming, and helpful members of the Open edX community including Beth Porter, Sarina Canelake, Ned Batchelder, Mark Hoeber, and others.

We hope you’re as excited as we are to see this integration between Open edX and Office 365, and, as with Moodle over the last few months, we look forward to this being just the beginning of exciting integrations between open source Open edX and Office 365.

Jean Paoli, President
Rob Dolin, Senior Program Manager
Doug Mahugh, Senior Technical Evangelist
Microsoft Open Technologies, Inc.

 

Running Cocos2d-x Games on Windows IoT Core
Wed, 17 Jun 2015

Windows IoT Core is coming with Windows Universal Platform support, so Cocos2d-x games may also have the potential to become Windows IoT Core applications. I took the liberty of testing one on my Raspberry Pi 2… and it's running!

 

System Setup

You can reference the steps to install IoT Core on the Raspberry Pi 2 in my previous post. Once the Raspberry Pi 2 boots correctly into IoT Core, the main screen will display the corresponding IP address:


To develop for Windows 10, you will need to set up Visual Studio 2015 to leverage the necessary tools:


Alternatively, you may use a virtual machine. See the complete setup process to create a bootable virtual machine here.

 

Running the Code

In my sample, I used a simple particles application with only one fixed sprite. Once my code compiled and ran well for x86 (within a window on Windows 10), I built for ARM in order to deploy the application to the Raspberry Pi 2. Before deploying, I filled in the debugging properties in the project properties (you will need to select ARM before these properties are visible):


I then filled in the machine name field with the IP address of the board, and selected No in the Require Authentication dialog.

The result appeared a few seconds later on my screen:


The frame rate is low, but the code is running! I encourage those of you who are interested to leverage this learning and start creating applications that you can enjoy with Cocos2d-x on your devices running Windows IoT Core.

Have fun!

Running Video with OpenCV on Modern Windows
Thu, 04 Jun 2015

Our latest contribution of open source code to the OpenCV project completes the relevant OpenCV libraries to enable video modules to run on Modern Windows (Windows 8.1, Windows Phone 8.1, as well as the Windows 10 preview!).

Getting Started

To take advantage of this, you will first need to clone the OpenCV repo and follow the readme instructions to create the necessary setup to build a WinRT sample.

As is normal for many open source projects on GitHub, OpenCV ‘arrives’ in source form only. Therefore, in addition to building the binaries, you will also need to generate the project files for Windows using CMake. Follow the steps outlined in the readme file.

Creating a Project

To begin, create a C++ project targeting Windows Store Apps (Blank App).


Add an image control to the screen definition (MainPage.xaml) and give it a name (for this example, I have used “imgCV”).


Add an implementation for the page Loaded event. To do this, select the page property and double-click on the Loaded event.


At this point, you will be ready to add the OpenCV code. Take note that you will need to reference the relevant libraries. You can certainly do that in the ‘classic’ C++ dialog box, but that process can become tedious and time-consuming if it is needed for all the platforms and modes you plan to target (Win32, ARM, Debug/Release). If you are working with more than one, I encourage you to create and load a property page within the project.

Configuring the project to use OpenCV: Creating a Property Page

To do this, you will first need to create the property page (there is a good example in the WinRT sample). To load the property sheet, select the Property Manager:


Then, from the Property Manager, click on the + icon to add your property sheet.


(In this example, I copied the one found in the WinRT sample repo, which contains these lines:)

<None Include="$(OpenCV_Bin)opencv_videoio300$(DebugSuffix).dll">
      <DeploymentContent>true</DeploymentContent>
</None>

<AdditionalDependencies>opencv_core300$(DebugSuffix).lib;opencv_imgproc300$(DebugSuffix).lib;opencv_features2d300$(DebugSuffix).lib;opencv_flann300$(DebugSuffix).lib;opencv_ml300$(DebugSuffix).lib;opencv_imgcodecs300$(DebugSuffix).lib;opencv_objdetect300$(DebugSuffix).lib;opencv_videoio300$(DebugSuffix).lib;%(AdditionalDependencies)</AdditionalDependencies>

Configuring the project to use OpenCV: Creating the package

 

Now, you should be able to build. However, be sure that the Modern Windows package contains the necessary OpenCV DLLs. You will need to select the DLLs you want to be included in the package (for this sample you only need video support: opencv_core300, opencv_imgcodecs300, opencv_imgproc300 and opencv_video300).

Add each of these DLLs from the binary folder you previously created using CMake and force the Content property to true (it is possible to select all of the DLLs and then apply the property to all of them at once).


In this sample, you can see that I added both Release and Debug DLLs to my sample project.

Adding the OpenCV code

To add code for the video:

First, declare the video capture:

                cv::VideoCapture cam;

Second, declare a global function to be used as VideoTask:

void cvVideoTask()
{
    cv::Mat frame;
    cam.open(0);
    while (1)
    {
        // get a new frame from camera - this is non-blocking per spec
        cam >> frame;
        if (!cam.grab()) continue;
        winrt_imshow();
    }
}

Third, within the Page Loaded event, add these two lines to identify the image control for displaying the video and the task function for video rendering:

void VideoCV::MainPage::Page_Loaded(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    cv::winrt_setFrameContainer(imgCV);
    cv::winrt_startMessageLoop(cvVideoTask);
}

 

You will now be ready to build and run (hit F5), and you should see this kind of result:


(You can test this code from the sample located in the repo opencv\samples\winrt_universal\VideoCaptureXAML\video_capture_xaml)

Motion Detection, Just for Fun

A simplified approach to motion detection is that if something has moved between two frames, the color of the affected pixels will have changed. This algorithm stores the last frame and compares the current and prior frames pixel by pixel. In this example, any pixels that have changed will appear in blue.

Declare a Compare function like this:

int seuil = 10;

void Compare(Mat f, Mat oldF)
{
        if (oldF.rows == 0)
                return;

        for (int i = 0; i < f.rows; i++)
        {
                for (int j = 0; j < f.cols; j++)
                {
                        if (abs(f.at<cv::Vec3b>(i, j)[2] - oldF.at<cv::Vec3b>(i, j)[2]) > seuil &&
                                abs(f.at<cv::Vec3b>(i, j)[0] - oldF.at<cv::Vec3b>(i, j)[0]) > seuil &&
                                abs(f.at<cv::Vec3b>(i, j)[1] - oldF.at<cv::Vec3b>(i, j)[1]) > seuil)
                        {
                                f.at<cv::Vec3b>(i, j)[2] = 255;
                                f.at<cv::Vec3b>(i, j)[1] = 0;
                                f.at<cv::Vec3b>(i, j)[0] = 0;
                        }
                }
        }
}
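The per-pixel rule above can be exercised in isolation, without OpenCV. The sketch below is a hypothetical stand-alone version of the same test (pixelChanged is not part of the sample; "seuil" is French for threshold): a pixel counts as changed only when all three channels differ from the previous frame by more than the threshold.

```cpp
#include <cstdlib>

// Hypothetical helper mirroring the channel test inside Compare():
// all three channels must differ by more than the threshold ("seuil")
// for a pixel to be marked as changed.
bool pixelChanged(const unsigned char cur[3], const unsigned char prev[3], int seuil = 10)
{
    return std::abs(cur[0] - prev[0]) > seuil &&
           std::abs(cur[1] - prev[1]) > seuil &&
           std::abs(cur[2] - prev[2]) > seuil;
}
```

Requiring all three channels to move keeps single-channel sensor noise from being flagged as motion, at the cost of missing changes that affect only one channel.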

Then call this function within your video task:

void cvVideoTask()
{
    cv::Mat frame, oldFrame, tmp;
    cam.open(0);
    while (1)
    {
        // get a new frame from camera - this is non-blocking per spec
        cam >> frame;
        if (!cam.grab()) continue;

        frame.copyTo(tmp);
        Compare(frame, oldFrame);
        tmp.copyTo(oldFrame);

        winrt_imshow();
    }
}

 

You should have a similar result to this!


Have fun with this demo, and don’t hesitate to share your feedback.

A bientôt !

 

ANGLE for Windows Store Is Now Available via NuGet
Wed, 03 Jun 2015

Since 2013, Microsoft has been involved with the ANGLE project, helping developers run their OpenGL ES 2.0 apps on Windows. In 2014, we helped those apps run on Windows Phones by providing support for DirectX feature level 9_3 and the Windows Universal App platform. Today, we make it even easier for developers to use ANGLE in their apps by making ANGLE for Windows Store available in the NuGet store. Developers can use their NuGet client, such as Visual Studio 2013, to include a reference to ANGLE, adding all required references to their project.

We are providing NuGet as an additional option to today’s method of downloading and building the source code! We’ve included a quick guide on how to get started with the NuGet package, speeding up your time to seeing results in your app.

For more information on ANGLE and NuGet: http://www.nuget.org/packages/ANGLE.WindowsStore/

For more information on ANGLE and GitHub: https://github.com/MSOpenTech/angle

Tony Balogh
Senior Program Manager

How to create a Bootable Windows 10 VHD for cocos2d-x Development
Wed, 03 Jun 2015

As the Windows 10 technologies become available, we thought it would be useful to help you create a virtual machine for Windows 10 for testing your ideas and projects with Cocos2d-x.

 

Warnings

· DISABLE BITLOCKER BEFORE CONTINUING!! These instructions have not yet worked successfully with a BitLocker-enabled drive.

· Back up your files and make sure you have a recovery disk for your current OS!!

 

Instructions

1. Download the Windows 10 x64 ISO from Windows Insider (currently Build 10074)

Product key: 6P99N-YF42M-TPGBG-9VMJP-YKHCF

2. Download Convert-WindowsImage.ps1

3. Run Command Prompt as Administrator

4. Run the following command

Powershell.exe -ExecutionPolicy Unrestricted -File Convert-WindowsImage.ps1 -ShowUI

5. Fill out the fields similar to the following and click on Make My VHD


6. After the VHD is created, right-click on 10074.vhd and select Mount.

DISABLE BITLOCKER BEFORE CONTINUING!!

7. Enter the following command into the Console

bcdboot <drive letter>:\windows

where <drive letter> is the drive letter assigned to the mounted VHD

8. Restart your computer and select Windows 10 Preview from boot up menu.

9. Complete the Windows 10 installation.

10. Install Python 2.7.x

a. add c:\Python27 to the Path Environment setting

11. Install Git

12. Enable Hyper-V in BIOS and Windows Programs and Features settings.

13. Install Visual Studio Community 2015 RC

14. Enable your computer for development. Follow these instructions (use the gpedit section).

15. You should now be able to clone and build cocos2d-x for Windows 10 UAP

a. git clone https://github.com/MSOpenTech/cocos2d-x.git

b. cd cocos2d-x

c. git checkout v3.6-uap

d. git submodule update --init

e. python download-deps.py

 

How to Remove a VHD from the Boot Menu

from http://mythoughtsonit.com/2012/03/how-to-clean-up-your-boot-from-vhd-menu-2/


Boot into one of your Windows OS installations and start MSConfig.exe. From there, select the instance of Windows you want to remove and click Delete.

To remove the VHD file itself, simply delete the VHD from an OS that is not using it to boot.

 

Have fun with Windows 10 and Cocos2d-x.

A bientôt

Supporting the Windows Store Certification D3D Device Trim requirement
Wed, 20 May 2015

Applications submitted to the Windows Store must pass the Windows App Certification Kit. This kit launches the application and performs multiple tests to ensure that the application meets the Windows Store requirements.

A test category labeled Direct3D Feature Test checks if your application properly calls IDXGIDevice3::Trim when it is suspended. Starting in Windows 8.1, apps that render with Direct2D and/or Direct3D (including CoreWindow and XAML interop) must call Trim in response to the Process Lifetime Management (PLM) suspend callback.

The “Direct3D Trim after Suspended” test in the “Direct3D Feature Test” category will report the following error if the Trim requirement is not met.

"One or more applications in the package did not call Trim() on their DXGI Devices while being suspended."

What is Trim?

Trim is a mechanism that instructs the Direct3D runtime and the graphics driver that it is safe to discard internal memory buffers allocated for the app, reducing its memory footprint when needed. This is performed by calling IDXGIDevice3::Trim().

How do I make my application pass this Trim requirement?

ANGLE provides two options to support this requirement: an automatic option, where ANGLE will manage calling Trim for you, and a manual option, where your application controls when Trim is called. The default in ANGLE is NOT to automatically call Trim for the application. Enabling automatic trim in ANGLE is recommended.

Enable Automatic Trim

To have ANGLE manage this trim requirement for your application, all you need to do is add EGL_PLATFORM_ANGLE_ENABLE_AUTOMATIC_TRIM_ANGLE set to EGL_TRUE in the EGL display attributes passed to eglGetPlatformDisplayEXT().

Manual Trim

First, you need to register for the application Suspending event handler. Examples of this registration can be found in the DirectX Windows Store samples.

Next, you need to let ANGLE know that you want to control calling Trim yourself. This is done by adding EGL_PLATFORM_ANGLE_ENABLE_AUTOMATIC_TRIM_ANGLE set to EGL_FALSE in the EGL display attributes passed to eglGetPlatformDisplayEXT(). After doing this, ANGLE will no longer call IDXGIDevice3::Trim() for your application.

The code snippet below shows how you can acquire the D3DDevice from the EGLDisplay and eventually call Trim.  The following snippets show how to register for the application suspending event.  Note the different versions of event registrations tailored for CoreWindow and XAML based applications.

CoreWindow application registration for Suspending event

void App::Initialize(CoreApplicationView^ applicationView)
{
    CoreApplication::Suspending +=
        ref new EventHandler<SuspendingEventArgs^>(this, &App::OnSuspending);
}

 

XAML application registration for Suspending event

App::App()
{
    InitializeComponent();
    Suspending += ref new SuspendingEventHandler(this, &App::OnSuspending);
}

 

Suspending event handler

void App::OnSuspending(Object^, SuspendingEventArgs^)
{
    // Call trim helper in response to the suspending event
    mOpenGLES.Trim();
}

 

In this example, the OpenGLES helper class is extended to contain a new public method called ‘Trim’ which can be called directly by the application object where the suspended event handler is registered.

Implementation of the Trim() helper function

void OpenGLES::Trim()
{
    PFNEGLQUERYDISPLAYATTRIBEXTPROC QueryDisplayAttribEXT =
        (PFNEGLQUERYDISPLAYATTRIBEXTPROC)eglGetProcAddress("eglQueryDisplayAttribEXT");
    PFNEGLQUERYDEVICEATTRIBEXTPROC QueryDeviceAttribEXT =
        (PFNEGLQUERYDEVICEATTRIBEXTPROC)eglGetProcAddress("eglQueryDeviceAttribEXT");

    EGLAttrib device = 0;
    EGLAttrib angleDevice = 0;

    // Ask ANGLE for its EGLDeviceEXT, then for the underlying D3D11 device.
    if (QueryDisplayAttribEXT(mEglDisplay, EGL_DEVICE_EXT, &angleDevice) == EGL_TRUE)
    {
        if (QueryDeviceAttribEXT(reinterpret_cast<EGLDeviceEXT>(angleDevice), EGL_D3D11_DEVICE_ANGLE, &device) == EGL_TRUE)
        {
            ComPtr<ID3D11Device> d3d11Device = reinterpret_cast<ID3D11Device*>(device);
            ComPtr<IDXGIDevice3> dxgiDevice3;
            if (SUCCEEDED(d3d11Device.As(&dxgiDevice3)))
            {
                dxgiDevice3->Trim();
            }
        }
    }
}

Cooper Partin

Senior Software Engineer

]]>
/blog/2015/05/20/supporting-the-windows-store-certification-d3d-device-trim-requirement/feed/ 0
UAP in Action: Running OpenCV on Raspberry Pi II /blog/2015/05/15/uap-in-action-running-opencv-on-raspberry-pi-ii/ /blog/2015/05/15/uap-in-action-running-opencv-on-raspberry-pi-ii/#comments Fri, 15 May 2015 22:09:46 +0000 https://msopentech.com/?p=891791 Read More]]>

clip_image002Microsoft Windows 10 offers the promise of building applications once that will run on every device. Leveraging our contributions to OpenCV for Modern Windows, we have also worked with the product engineering team to apply this work to Windows 10. With the latest branch (vs2015-samples) on the MS Open Tech GitHub repo, developers can now build OpenCV for ARM in the Windows 10 context.

Getting started

You will need the following pre-installed to run this demo:

· Windows 10 (I am running build 10074)

· Visual Studio 2015 RC

· Visual Studio 2015 Tools for Windows 10

You can access these tools here and here

Once you have this setup in place, you will be able to build Universal Applications for Windows 10. This means that, unlike Windows 8.1 (where you needed one project for phone and one for Windows, with shared source), you can now build a single project that extends to potentially several targets. For more information about Windows 10 Universal Windows Platform functionality, have a look at this excellent MSDN article.

Setting Up Raspberry PI II with Windows 10

clip_image004Raspberry Pi II can run Windows 10 IoT Core. Follow these steps to set it up correctly.

The Windows 10 IoT Core Insider Preview Image for Raspberry Pi 2 archive that you used to set up Raspberry Pi II also contains a very useful tool (Windows IoT Core Watcher) that allows you to find your board on the network and connect to it through your browser. While it is optional for the purposes of today's demo, I highly recommend using it, as it is all you need to deploy your OpenCV application.

 

 

Building OpenCV Libraries for ARM

Before starting to code the app, you will first need to build the OpenCV libraries and DLLs for ARM. We (MS Open Tech) created a special branch for this purpose. At the time of this writing, the branch has not yet been merged into the main OpenCV repo, but we plan to submit the pull request soon; in the meantime, it can be accessed from our OpenCV repo.

So, I suggest that you clone the MS Open Tech GitHub repo.

As explained in the readme file, you will need to create the environment variables needed for Visual Studio 2015 and the WinRT samples.

You will then open the OpenCV solution itself, located at [vs2015\WS\10.0\ARM], and build it for ARM. The build process will generate all of the needed libs and DLLs and place them in the lib and bin folders under [vs2015\WS\10.0\ARM]. Note: this is the directory path you will need to reference to build your OpenCV application.

Build a Modern OpenCV Application

First, create a new solution in Visual Studio 2015 using the Universal App template for Windows 10:

clip_image006

Add the path to the libs you just generated to your project. You will also need to reference the OpenCV DLLs in your project (right-click on the solution) and add [opencv_core300d.dll, opencv_imgproc300d.dll ….]

clip_image008

As the plan is to deploy your application on Raspberry Pi II, you should select the ARM target and Remote Machine (we’ll go over how to configure it later):

clip_image009

For demo purposes, I display a simple image on the screen. This image is stored in the Assets folder (for simplicity). You can add an image to the Assets folder of the solution (right-click on the Assets folder in Visual Studio).

 

Next, add three buttons and an image control on the XAML definition of your screen:

Then add the code within the first button click handler, to load the image:

cv::Mat image = cv::imread("Assets/grpPC1.jpg");
Lena = cv::Mat(image.rows, image.cols, CV_8UC4);
cv::cvtColor(image, Lena, CV_BGR2BGRA);
UpdateImage(Lena);

Lena is a cv::Mat declared in the header.

UpdateImage is a method that displays the cv::Mat in the Image control.
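UpdateImage itself is not shown in the post. A minimal sketch of one common approach — copying the BGRA cv::Mat pixels into a XAML WriteableBitmap — might look like the following. Note the assumptions: the Mat is continuous BGRA (as produced by the cvtColor call above), and `imageControl` is a placeholder for whatever x:Name you gave the Image control:

```cpp
// Sketch (assumes a continuous BGRA cv::Mat and an Image control named imageControl):
// copy the pixel data into a WriteableBitmap and set it as the Image source.
void MainPage::UpdateImage(const cv::Mat& image)
{
    auto bitmap = ref new Windows::UI::Xaml::Media::Imaging::WriteableBitmap(image.cols, image.rows);

    // Get the raw byte pointer behind the bitmap's pixel buffer (robuffer.h).
    Microsoft::WRL::ComPtr<Windows::Storage::Streams::IBufferByteAccess> bufferAccess;
    reinterpret_cast<IInspectable*>(bitmap->PixelBuffer)->QueryInterface(IID_PPV_ARGS(&bufferAccess));
    byte* pixels = nullptr;
    bufferAccess->Buffer(&pixels);

    // BGRA Mat layout matches the bitmap's BGRA8 layout, so a flat copy works.
    memcpy(pixels, image.data, image.total() * image.elemSize());

    imageControl->Source = bitmap;  // imageControl: the XAML Image control (assumed name)
}
```

A variant of this Mat-to-WriteableBitmap conversion appears in the OpenCV WinRT samples, so you can also crib the exact helper from the repo linked below.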

You will next add a second button to test the Canny filter with this code:

// Canny filter
cv::Mat result;
cv::Mat intermediateMat;
cv::Canny(Lena, intermediateMat, 80, 90);
cv::cvtColor(intermediateMat, result, CV_GRAY2BGRA);
UpdateImage(result);

Finally, you will add a third button to test the Face and Body detection using this code:

cv::String face_cascade_name = "Assets/haarcascade_frontalface_alt.xml";
cv::CascadeClassifier face_cascade;
cv::String body_cascade_name = "Assets/haarcascade_fullbody.xml";
cv::CascadeClassifier body_cascade;

void internalDetectObjects(cv::Mat& inputImg, std::vector<cv::Rect>& objectVector, std::vector<cv::Rect>& objectVectorBodies)
{
    cv::Mat frame_gray;
    cvtColor(inputImg, frame_gray, CV_BGR2GRAY);
    cv::equalizeHist(frame_gray, frame_gray);

    // Detect faces
    face_cascade.detectMultiScale(frame_gray, objectVector, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));

    // Detect bodies
    body_cascade.detectMultiScale(frame_gray, objectVectorBodies, 1.1, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(30, 300));
}

void RaspberryCV::MainPage::btn3_Click(Platform::Object^ sender, Windows::UI::Xaml::RoutedEventArgs^ e)
{
    if (!face_cascade.load(face_cascade_name)) {
        printf("Couldn't load face detector '%s'\n", face_cascade_name.c_str());
        exit(1);
    }
    if (!body_cascade.load(body_cascade_name)) {
        printf("Couldn't load body detector '%s'\n", body_cascade_name.c_str());
        exit(1);
    }

    cv::Mat frame = cv::imread("Assets/grpPC1.jpg");
    if (frame.empty())
        return;

    std::vector<cv::Rect> faces;
    std::vector<cv::Rect> bodies;
    internalDetectObjects(frame, faces, bodies);

    for (unsigned int i = 0; i < faces.size(); i++)
    {
        auto face = faces[i];
        cv::rectangle(Lena, face, cv::Scalar(0, 0, 255, 255), 5);
    }

    for (unsigned int i = 0; i < bodies.size(); i++)
    {
        auto body = bodies[i];
        cv::rectangle(Lena, body, cv::Scalar(0, 0, 0, 255), 5);
    }

    UpdateImage(Lena);
}

All of the source code outlined above is also available as a sample in our repo: https://github.com/MSOpenTech/opencv/tree/vs2015-samples/samples/winrt_universal under the Raspberry folder and solution.

Deploying and Running the Application

Once you have built the solution, it is time to deploy and run it on Raspberry Pi II.

Boot Raspberry Pi II with a screen and network connected. On the start screen, you will find its IP address:

clip_image011

Next, you will configure Visual Studio to target the Raspberry Pi. Right-click on the RaspberryCV project, open its Property Pages to set the IP address, and select No when prompted whether to Require Authentication:

clip_image013

When you then click the Remote Machine Button, you should obtain the same result as presented below:

You will see it load and display the image:

clip_image015

 

Then apply the Canny filter:

clip_image017

 

And observe it detect the face and body within the image.

clip_image019

 

Try it for Yourself!

You can see from this first simple demo that we can enable OpenCV scenarios on Raspberry Pi II. At the time of this post, camera support is not yet available on Raspberry Pi II running Windows 10, but it is coming soon, which will enable a variety of additional scenarios, such as real-time video frame analysis.

I encourage you to try this sample out for yourself, and I hope you will share your experiences and/or contributions to enrich this sample.

Happy coding!

A bientôt

]]>
/blog/2015/05/15/uap-in-action-running-opencv-on-raspberry-pi-ii/feed/ 0