<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF
 xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
 xmlns="http://purl.org/rss/1.0/"
 xmlns:dc="http://purl.org/dc/elements/1.1/"
 xmlns:cc="http://web.resource.org/cc/"
 >
<!--
	This ontology document is licensed under the Creative Commons
	Attribution License. To view a copy of this license, visit
	http://creativecommons.org/licenses/by/2.0/ or send a letter to
	Creative Commons, 559 Nathan Abbott Way, Stanford, California
	94305, USA.
-->
 <channel rdf:about="http://ebiquity.umbc.edu//tags/html/?t=risk-averting+error">
  <cc:license rdf:resource="http://creativecommons.org/licenses/by/2.0/" />
  <title><![CDATA[UMBC ebiquity RSS Tag Search]]></title>
  <link><![CDATA[http://ebiquity.umbc.edu//tags/html/?t=risk-averting+error]]></link>
  <description><![CDATA[UMBC ebiquity RSS Tag Search for risk-averting error]]></description>
  <items>
    <rdf:Seq>
      <rdf:li rdf:resource="http://ebiquity.umbc.edu/paper/html/id/930/Training-Deep-Neural-Networks-with-Gradual-Deconvexification"/>
      <rdf:li rdf:resource="http://ebiquity.umbc.edu/paper/html/id/932/Convexification-and-Deconvexification-for-Training-Artificial-Neural-Networks"/>
      <rdf:li rdf:resource="http://ebiquity.umbc.edu/paper/html/id/933/The-Normalized-Risk-Averting-Error-Criterion-for-Avoiding-Nonglobal-Local-Minima-in-Training-Neural-Networks"/>
      <rdf:li rdf:resource="http://ebiquity.umbc.edu/paper/html/id/934/A-pairwise-algorithm-for-training-multilayer-perceptrons-with-the-normalized-risk-averting-error-criterion"/>
      <rdf:li rdf:resource="http://ebiquity.umbc.edu/paper/html/id/937/Overcoming-the-Local-Minimum-Problem-in-Training-Multilayer-Perceptrons-with-the-NRAE-Training-Method"/>
    </rdf:Seq>
  </items>
 </channel>
 <item rdf:about="http://ebiquity.umbc.edu/paper/html/id/930/Training-Deep-Neural-Networks-with-Gradual-Deconvexification">
  <title><![CDATA[Training Deep Neural Networks with Gradual Deconvexification]]></title>
  <link>http://ebiquity.umbc.edu/paper/html/id/930/Training-Deep-Neural-Networks-with-Gradual-Deconvexification</link>
  <description><![CDATA[A new method of training deep neural networks including the convolutional network is proposed. The method deconvexifies the normalized risk-averting error (NRAE) gradually and switches to the risk-averting error (RAE) whenever RAE is computationally manageable. The method creates tunnels between the depressed regions around saddle points, tilts the plateaus, and eliminates nonglobal local minima. Numerical experiments show the effectiveness of gradual deconvexification as compared with unsupe...]]></description>
  <dc:date>2016-06-24</dc:date>
 </item>
 <item rdf:about="http://ebiquity.umbc.edu/paper/html/id/932/Convexification-and-Deconvexification-for-Training-Artificial-Neural-Networks">
  <title><![CDATA[Convexification and Deconvexification for Training Artificial Neural Networks]]></title>
  <link>http://ebiquity.umbc.edu/paper/html/id/932/Convexification-and-Deconvexification-for-Training-Artificial-Neural-Networks</link>
  <description><![CDATA[The purpose of this dissertation research is to overcome a fundamental problem in the theory and application of artificial neural networks (ANNs). The problem, called the local minimum problem in training ANNs, has plagued the ANN community since the middle of 1980s.  ANNs trained with backpropagation are extensively utilized to solve various tasks in artificial intelligence fields for decades. The computing power of ANNs is derived through its particularly distributed structure together with...]]></description>
  <dc:date>2016-05-01</dc:date>
 </item>
 <item rdf:about="http://ebiquity.umbc.edu/paper/html/id/933/The-Normalized-Risk-Averting-Error-Criterion-for-Avoiding-Nonglobal-Local-Minima-in-Training-Neural-Networks">
  <title><![CDATA[The Normalized Risk-Averting Error Criterion for Avoiding Nonglobal Local Minima in Training Neural Networks]]></title>
  <link>http://ebiquity.umbc.edu/paper/html/id/933/The-Normalized-Risk-Averting-Error-Criterion-for-Avoiding-Nonglobal-Local-Minima-in-Training-Neural-Networks</link>
  <description><![CDATA[The convexification method for data fitting is capable of avoiding nonglobal local minima, but suffers from two shortcomings: The risk-averting error (RAE) criterion grows exponentially as its risk-sensitivity index λ increases, and the existing method of determining λ is often not effective. To eliminate these shortcomings, the normalized RAE (NRAE) is herein proposed. As NRAE is a monotone increasing function of RAE, the region without a nonglobal local minimum of NRAE expands as does ...]]></description>
  <dc:date>2015-02-01</dc:date>
 </item>
 <item rdf:about="http://ebiquity.umbc.edu/paper/html/id/934/A-pairwise-algorithm-for-training-multilayer-perceptrons-with-the-normalized-risk-averting-error-criterion">
  <title><![CDATA[A pairwise algorithm for training multilayer perceptrons with the normalized risk-averting error criterion]]></title>
  <link>http://ebiquity.umbc.edu/paper/html/id/934/A-pairwise-algorithm-for-training-multilayer-perceptrons-with-the-normalized-risk-averting-error-criterion</link>
  <description><![CDATA[Proper use of the normalized risk-averting error (NRAE) criterion has been shown to avoid nonglobal local minima effectively in the mean squared error (MSE) criterion. For training on large datasets, a pairwise algorithm for the NRAE criterion similar to the widely-used least mean square algorithm for the MSE criterion is proposed. The gradual deconvexification method employing this pairwise algorithm is tested on examples with built-in nonglobal local minima that are difficult to avoid and o...]]></description>
  <dc:date>2014-07-07</dc:date>
 </item>
 <item rdf:about="http://ebiquity.umbc.edu/paper/html/id/937/Overcoming-the-Local-Minimum-Problem-in-Training-Multilayer-Perceptrons-with-the-NRAE-Training-Method">
  <title><![CDATA[Overcoming the Local-Minimum Problem in Training Multilayer Perceptrons with the NRAE Training Method]]></title>
  <link>http://ebiquity.umbc.edu/paper/html/id/937/Overcoming-the-Local-Minimum-Problem-in-Training-Multilayer-Perceptrons-with-the-NRAE-Training-Method</link>
  <description><![CDATA[A method of training multilayer perceptrons (MLPs) to reach a global or nearly global minimum of the standard mean squared error (MSE) criterion is proposed. It has been found that the region in the weight space that does not have a local minimum of the normalized risk-averting error (NRAE) criterion expands strictly to the entire weight space as the risk-sensitivity index increases to infinity. If the MLP under training has enough hidden neurons, the MSE and NRAE criteria are both equal to ...]]></description>
  <dc:date>2012-07-11</dc:date>
 </item>
</rdf:RDF>
